David Pasek – dpasek@vmware.com
VMware - TAM Program
SPECTRE AND MELTDOWN
PERFORMANCE IMPACT TESTS
March 14, 2018, Document version: 0.3
Purpose of this test plan
The goal of this test plan is to test the performance impact of SPECTRE and MELTDOWN remediations on Intel CPUs. We will run CPU-intensive workloads in virtual machine(s) running on non-patched and patched ESXi hosts and observe the performance impact.
We will test the impact on network, storage and memory performance, because these I/O-intensive workloads require CPU caching, which is affected by the vulnerability remediations.
Qualification of performance is a very specific and difficult subject, and the performance impact varies across different hardware and software configurations. However, the performed tests are described in detail in this document, so the reader can understand all conditions of the tests and the observed results, and can also perform the tests on his or her specific hardware and software configuration.
Specifications
Hypervisors (ESXi) Hardware and Software Specifications
ESX01
ESX01 - without any Spectre Patches
• Intel NUC D54250WYKH
• 1 x CPU i5-4250U @ 1.30 GHz
• 1 x 2 Cores / 4 logical CPU with Hyper-threading
• ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
ESX02
ESX02 - with Spectre Patches for Hypervisor-Specific Remediation
• Intel NUC D54250WYKH
• 1 x CPU i5-4250U @ 1.30 GHz
• 1 x 2 Cores / 4 logical CPU with Hyper-threading
• ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
ESX03
ESX03 - with Spectre Patches for Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation
• Intel NUC D54250WYKH
• 1 x CPU i5-4250U @ 1.30 GHz
• 1 x 2 Cores / 4 logical CPU with Hyper-threading
• ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
VM Hardware and Software Specifications
MS-Windows
MS-VM01
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
• IP address: 192.168.5.36
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
MS-VM02
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
• IP address: 192.168.5.37
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
MS-VM11
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB
4056898)
• IP address: 192.168.5.46
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
MS-VM12
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB
4056898)
• IP address: 192.168.5.47
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
Linux
LIN-VM01
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• OS – Centos 7 – without Spectre/Meltdown updates
▪ Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.31
• Software
▪ Redis
▪ Nuttcp
▪ Iftop
▪ Bc
LIN-VM02
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – Centos 7 – without Spectre/Meltdown updates
▪ Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.32
• Software
▪ Redis
▪ Nuttcp
▪ Iftop
▪ Bc
LIN-VM11
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – Centos 7 – with Spectre/Meltdown updates
▪ Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC
2018 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.41
• Software
▪ Redis
▪ Nuttcp
▪ Iftop
▪ Bc
LIN-VM12
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – Centos 7 – with Spectre/Meltdown updates
▪ Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC
2018 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.42
• Software
▪ Redis
▪ Nuttcp
▪ Iftop
▪ Bc
Performance Testing tools
CPU-Z - https://www.cpuid.com/softwares/cpu-z.html
Download: https://www.cpuid.com/downloads/cpu-z/cpu-z_1.83-en.exe
CPU-Z is freeware that gathers information on some of the main devices of your system.
IOMETER - http://www.iometer.org/
Download: http://www.iometer.org/doc/downloads.html
IOmeter is an I/O subsystem measurement and characterization tool for single and clustered
systems.
NUTTCP
Install:
RedHat 7: yum install --enablerepo=Unsupported_EPEL nuttcp
CentOS 7: yum install epel-release nuttcp
MS-Windows: http://nuttcp.net/nuttcp/latest/binaries/nuttcp-8.1.4.win64.zip
nuttcp is a client/server network performance measurement tool based on TTCP (Test TCP).
Usage …
The server part is started by the following command:
nuttcp -S -N 100
The client part is started by the following command:
cat /dev/zero | nuttcp -t -s -N 100 czchoapint092
Other nuttcp examples:
Server and Client
nuttcp -r -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -s -N 20 -P 5000 czchoapint094
Larger buffers
nuttcp -r -l 8972 -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -l 8972 -s -N 20 -P 5000 czchoapint094
UDP traffic
nuttcp -r -u -l 8972 -w4m -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -u -l 8972 -w4m -s -N 20 -P 5000 czchoapint094
REDIS - https://redis.io/
Install:
CentOS 7: yum install redis
MS-Windows: https://dingyuliang.me/redis-3-2-install-redis-windows/
Download: https://github.com/MicrosoftArchive/redis/releases/download/win-3.2.100/Redis-x64-3.2.100.zip
Redis is an open source (BSD licensed), in-memory data structure store, used as a database,
cache and message broker. It supports data structures such as strings, hashes, lists, sets,
sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius
queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different
levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic
partitioning with Redis Cluster.
Spectre/Meltdown OS remediations
ESXi
Use VMware Update Manager and patches based on VMSA-2018-02 and VMSA-2018-04.
MS-Windows
To protect MS-Windows, apply the updates available here:
http://www.catalog.update.microsoft.com/Search.aspx?q=KB4056898
To enable the fix, change the following registry settings:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations /t REG_SZ /d "1.0" /f
Restart the server for changes to take effect.
Linux / CentOS
Use “yum update” and apply the latest OS updates.
Spectre/Meltdown remediation checkers
ESXi
ESXi shell command to check whether the Intel security microcode update is in place …
if [ `vsish -e get /hardware/msr/pcpu/0/addr/0x00000048 2&>1 > /dev/null; echo $?` -eq 0 ]; then echo -e "\nIntel Security Microcode Updated\n"; else echo -e "\nIntel Security Microcode NOT Updated\n"; fi
MS-Windows
MS-Windows test tool for SPECTRE/MELTDOWN remediation
Installation
• Article: https://support.microsoft.com/en-us/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in
• PowerShell 5.0 is required
Install-Module SpeculationControl
Vulnerability Check (PowerShell commands)
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
Get-SpeculationControlSettings
Linux / CentOS
Linux test tool for SPECTRE/MELTDOWN remediation
Installation
• Blog: https://www.cyberciti.biz/faq/check-linux-server-for-spectre-meltdown-vulnerability/
• Tool:
cd /root
wget -O spectre-meltdown-checker.sh https://raw.githubusercontent.com/speed47/spectre-meltdown-checker/master/spectre-meltdown-checker.sh
chmod 755 ./spectre-meltdown-checker.sh
Vulnerability Check (Shell command)
/root/spectre-meltdown-checker.sh
Spectre/Meltdown remediation status of VMs on ESXi hosts
MS-Windows
MS-VM01 on ESX01
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM01 on ESX02
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM01 on ESX03
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
MS-VM02 on ESX01
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM02 on ESX02
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM02 on ESX03
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
MS-VM11 on ESX01
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM11 on ESX02
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM11 on ESX03
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
MS-VM12 on ESX01
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM12 on ESX02
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM12 on ESX03
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
Linux / CentOS
LIN-VM01 on ESX01
VM Guest OS – Centos 7 – without Spectre/Meltdown updates (Linux 3.10.0-
514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM01 on ESX02
VM Guest OS – Centos 7 – without Spectre/Meltdown updates (Linux 3.10.0-
514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM01 on ESX03
VM Guest OS – Centos 7 – without Spectre/Meltdown updates (Linux 3.10.0-
514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
LIN-VM02 on ESX01
VM Guest OS – Centos 7 – without Spectre/Meltdown updates (Linux 3.10.0-
514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM02 on ESX02
VM Guest OS – Centos 7 – without Spectre/Meltdown updates (Linux 3.10.0-
514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM02 on ESX03
VM Guest OS – Centos 7 – without Spectre/Meltdown updates (Linux 3.10.0-
514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
LIN-VM11 on ESX01
VM Guest OS – Centos 7 – with Spectre/Meltdown updates (Linux 3.10.0-
693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM11 on ESX02
VM Guest OS – Centos 7 – with Spectre/Meltdown updates (Linux 3.10.0-
693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM11 on ESX03
VM Guest OS – Centos 7 – with Spectre/Meltdown updates (Linux 3.10.0-
693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
LIN-VM12 on ESX01
VM Guest OS – Centos 7 – with Spectre/Meltdown updates (Linux 3.10.0-
693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM12 on ESX02
VM Guest OS – Centos 7 – with Spectre/Meltdown updates (Linux 3.10.0-
693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM12 on ESX03
VM Guest OS – Centos 7 – with Spectre/Meltdown updates (Linux 3.10.0-
693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64
GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
Performance tests
MS Windows OS
CPU performance (Win/CPU-Z) - single VM on top of ESXi host
Verification type Design
Test type Performance
Tested area CPU
Test name CPU performance (Win/CPU-Z) of VM on top of ESXi host
Test description Verification of Spectre/Meltdown security patches impact on CPU
performance
Tasks
Step 1/ Generate CPU workload leveraging the CPU-Z benchmarking tool.
Run CPU-Z on MS-VM
Step 2/ Note CPU performance (CPU-Z benchmark single thread and
multi thread)
Test combinations
• ESXi host without security patches – OS without security
patches (MS-VM01 on ESX01)
• ESXi host without security patches – OS with security patches
(MS-VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS without security patches (MS-VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS with security patches (MS-VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation security patches – OS with security patches
(MS-VM11 on ESX03).
Compare results and quantify impact.
Expected results Lower CPU performance on systems with security patches.
Test tools: CPU-Z
Test result: passed
Test notes:
Test results

OS without security patches – MS-VM01
• ESX01 (ESXi host without security patches)
Single Thread: 233.9, 235.4, 237.1, 236.7, 236.7 – AVG = 236.3
Multi Thread: 624.3, 624.5, 624.2, 621.8, 624.9 – AVG = 624.3
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
Single Thread: 236.0, 233.8, 233.3, 233.4, 233.9 – AVG = 233.7
Multi Thread: 616.6, 623.4, 622.4, 624.6, 624.1 – AVG = 623.3
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
Single Thread: 233.8, 253.6, 233.0, 234.6, 235.1 – AVG = 234.5
Multi Thread: 623.7, 623.3, 624.5, 625.0, 623.8 – AVG = 624

OS with security patches – MS-VM11
• ESX01 (ESXi host without security patches)
Single Thread: 234.5, 235.3, 232.0, 225.8, 236.5 – AVG = 231.9
Multi Thread: 622.2, 619.1, 621.0, 612.8, 621.2 – AVG = 620.4
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
Single Thread: 232.1, 234.3, 233.4, 234.7, 234.1 – AVG = 233.9
Multi Thread: 622.2, 623.6, 621.0, 620.9, 622.3 – AVG = 621.8
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
Single Thread: 208.1, 207.4, 202.9, 206.1, 209.7 – AVG = 207.2
Multi Thread: 604.7, 597.5, 602.3, 609.5, 610.6 – AVG = 605.5
Storage performance (Win/IOmeter) – Single VM storage performance to local disk
Verification type Design
Test type Performance
Tested area Storage
Test name Storage performance (Win/IOmeter) – single VM storage
performance to local disk
Test description Verification of CPU performance impact on storage performance
Tasks
Step 1/ Run IOmeter GUI on MS-VM01.
Step 2/ Run disk IO testing tools (VM01 with IOmeter GUI and
dynamo) and generate load to disk on shared storage.
I/O workload patterns for tests
• 512B, 100% Random, 50% Write
• 64kB, 100% Random, 50% Write
Multi threading configurations
• 4 Workers / 1 Outstanding IO
Disk Size 10GB (20000000 sectors)
Step 3/ Note storage performance (I/Os per second = IOPS), data
throughput (MB/s), response time (ms) and CPU load
Test combinations
• ESXi host without security patches – OS without security
patches (MS-VM01 and MS-VM02 on ESX01)
• ESXi host without security patches – OS with security patches
(MS-VM11 and MS-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS without security patches (MS-VM01 and MS-
VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS with security patches (MS-VM11 and MS-VM12
on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation security patches – OS with security patches
(MS-VM11 and MS-VM12 on ESX03).
Compare results and quantify impact.
Expected results Lower storage performance on systems with security patches.
Test tools: IOmeter
Test result: passed
Test notes:
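As a sanity check on the metrics noted in Step 3, IOPS, throughput, and response time are tied together by the I/O size and the number of outstanding I/Os. A short sketch, assuming IOmeter reports MB/s using 10^6 bytes per MB and that total concurrency is 4 (4 workers × 1 outstanding I/O); these conventions are assumptions, not stated explicitly in the document:

```python
# Sanity-check the relationship between the metrics recorded in Step 3.
# Assumptions: MB = 10^6 bytes; total outstanding I/Os = 4 workers x 1 OIO.

def throughput_mbps(iops: float, io_size_bytes: int) -> float:
    """Throughput in MB/s implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size_bytes / 1e6

def avg_response_time_ms(iops: float, outstanding_ios: int) -> float:
    """Little's law: concurrency = IOPS x response time."""
    return outstanding_ios / iops * 1000

# First 512B sample measured below (MS-VM01 on ESX01): 39424 IOPS, 20.19 MB/s, 0.1007 ms RT.
print(round(throughput_mbps(39424, 512), 2))     # 20.19
print(round(avg_response_time_ms(39424, 4), 4))  # 0.1015 (close to the measured 0.1007)
```

The two derived values reproduce the measured MB/s exactly and the response time to within about 1%, which suggests the recorded metrics are internally consistent.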
Test results – I/O size 512B

OS without security patches – MS-VM01
• ESX01 (ESXi host without security patches)
IOPS: 39424, 39391, 39652, 39582, 39839
MB/s: 20.19, 20.17, 20.30, 20.27, 20.40 – AVG MB/s = 20.25
RT (ms): 0.1007, 0.1008, 0.1001, 0.1003, 0.0996
CPU load (%): 32.32, 33.20, 34.71, 34.90, 33.60
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
IOPS: 38829, 39023, 38600, 38847, 39521
MB/s: 19.88, 19.98, 19.76, 19.89, 20.24 – AVG MB/s = 19.91
RT (ms): 0.1021, 0.1016, 0.1027, 0.1021, 0.1003
CPU load (%): 32.79, 32.91, 35.35, 35.49, 34.49
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
IOPS: 39596, 39459, 38660, 38953, 39775
MB/s: 20.27, 20.20, 19.79, 19.94, 20.36 – AVG MB/s = 20.13
RT (ms): 0.1003, 0.1006, 0.1027, 0.1019, 0.0998
CPU load (%): 32.12, 33.13, 35.16, 35.17, 34.53

OS with security patches – MS-VM11
• ESX01 (ESXi host without security patches)
IOPS: 37541, 38074, 37899, 38188, 38096
MB/s: 19.22, 19.49, 19.40, 19.55, 19.51 – AVG MB/s = 19.47
RT (ms): 0.1057, 0.1042, 0.1047, 0.1039, 0.1042
CPU load (%): 35.21, 36.73, 36.39, 36.10, 37.36
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
IOPS: 34076, 37007, 37832, 32789, 33052
MB/s: 17.45, 18.95, 19.37, 16.79, 16.92 – AVG MB/s = 17.77
RT (ms): 0.1164, 0.1072, 0.1048, 0.1210, 0.1200
CPU load (%): 60.28, 44.48, 37.29, 61.28, 61.60
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
IOPS: 30667, 30722, 30839, 30179, 29600
MB/s: 15.7, 15.73, 15.79, 15.45, 15.16 – AVG MB/s = 15.63
RT (ms): 0.1294, 0.1292, 0.1287, 0.1316, 0.1341
CPU load (%): 40.92, 41.93, 40.71, 40.27, 41.74
Test results – I/O size 64kB

OS without security patches – MS-VM01
• ESX01 (ESXi host without security patches)
IOPS: 7638, 7566, 7619, 7600, 7704
MB/s: 500.62, 495.87, 499.34, 498.12, 504.90 – AVG MB/s = 499.36
RT (ms): 0.5228, 0.5278, 0.5241, 0.5254, 0.5184
CPU load (%): 7.69, 7.74, 7.99, 7.67, 7.42
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
IOPS: 7593, 7533, 7619, 7655, 7620
MB/s: 497.65, 493.70, 499.36, 501.74, 499.39 – AVG MB/s = 498.8
RT (ms): 0.5257, 0.5299, 0.5239, 0.5214, 0.5239
CPU load (%): 8.26, 7.70, 8.12, 8.40, 8.01
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
IOPS: 7651, 7672, 7644, 7672, 7455
MB/s: 501.48, 502.84, 501.00, 502.80, 488.61 – AVG MB/s = 501.76
RT (ms): 0.5220, 0.5205, 0.5224, 0.5205, 0.5357
CPU load (%): 8.31, 7.71, 8.02, 8.45, 8.39

OS with security patches – MS-VM11
• ESX01 (ESXi host without security patches)
IOPS: 7607, 7651, 7612, 7568, 7603
MB/s: 498.59, 501.44, 498.90, 495.98, 498.31 – AVG MB/s = 497.82
RT (ms): 0.5248, 0.5219, 0.5245, 0.5276, 0.5251
CPU load (%): 9.25, 9.55, 9.19, 9.29, 9.55
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
IOPS: 7599, 7662, 7593, 7596, 7657
MB/s: 497.76, 502.20, 497.62, 497.86, 501.83 – AVG MB/s = 499.15
RT (ms): 0.5257, 0.5211, 0.5257, 0.5255, 0.5213
CPU load (%): 9.40, 9.94, 8.88, 8.71, 9.49
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
IOPS: 7541, 7575, 7582, 7558, 7556
MB/s: 494.25, 496.45, 496.92, 495.36, 495.24 – AVG MB/s = 495.68
RT (ms): 0.5294, 0.5270, 0.5265, 0.5281, 0.5282
CPU load (%): 12.98, 12.18, 12.93, 13.09, 12.65
Network performance (Win/IOmeter) between two VMs within the same ESXi host
Verification type Design
Test type Performance
Tested area Network
Test name Network performance (Win/IOmeter) between two VMs within the
same ESXi host
Test description Verification of CPU performance impact on network performance
Tasks
Step 1/ Run IOmeter GUI on MS-VM01.
Step 2/ Remove all storage workers
Step 3/ Run IOmeter dynamo on MS-VM02 connected to IOmeter host
<hostname of VM01> … dynamo.exe -i MS-VM01 -m MS-VM02
Step 4/ Create 8 network workers. Assign specification I/O Size 512B,
100% Read to all network workers. Set test duration 30 seconds.
Step 5/ Generate network workload between two MS-VM’s on the
same ESXi host
Step 6/ Note network performance (packets per second), throughput
(MB/s), Response Time (ms) and CPU load (%)
Test combinations
• ESXi host without security patches – OS without security
patches (MS-VM01 and MS-VM02 on ESX01)
• ESXi host without security patches – OS with security patches
(MS-VM11 and MS-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS without security patches (MS-VM01 and MS-
VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS with security patches (MS-VM11 and MS-VM12
on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation security patches – OS with security patches
(MS-VM11 and MS-VM12 on ESX03).
Compare results and quantify impact.
Expected results Lower network performance on systems with security patches.
Test tools: IOmeter
Test result: passed
Test notes:
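The packets-per-second and MB/s figures noted in Step 6 are related through the 512B I/O size set in Step 4. A quick sketch, assuming (as elsewhere in this plan) 10^6 bytes per MB:

```python
# Convert IOmeter network packets-per-second to throughput (Step 6 metrics).
# Assumption: 512B payload per packet (the I/O size from Step 4), MB = 10^6 bytes.

def mbps_from_pps(pps: float, payload_bytes: int = 512) -> float:
    return pps * payload_bytes / 1e6

# First sample measured below (MS-VM01/MS-VM02 on ESX01): 828726 PPS.
print(round(mbps_from_pps(828726), 2))  # 424.31, matching the recorded MB/s
```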
Test results

OS without security patches – MS-VM01 and MS-VM02
• ESX01 (ESXi host without security patches)
PPS: 828726, 925343, 944935, 935526, 855468
MB/s: 424.31, 473.78, 483.81, 478.99, 438.00 – AVG MB/s = 463.59
RT (ms): 0.0191, 0.0170, 0.0167, 0.0168, 0.0184
CPU load (%): 56.76, 60.22, 59.97, 60.31, 58.95
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
PPS: 748945, 896333, 876467, 836831, 927155
MB/s: 383.46, 458.92, 448.75, 428.46, 474.70 – AVG MB/s = 445.37
RT (ms): 0.0211, 0.0176, 0.0180, 0.0189, 0.0170
CPU load (%): 57.18, 58.88, 59.98, 60.53, 58.93
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
PPS: 708065, 843054, 846919, 803661, 844828
MB/s: 362.53, 431.64, 433.62, 411.47, 432.55 – AVG MB/s = 425.22
RT (ms): 0.0223, 0.0187, 0.0186, 0.0196, 0.0187
CPU load (%): 56.95, 58.67, 58.22, 60.49, 57.84

OS with security patches – MS-VM11 and MS-VM12
• ESX01 (ESXi host without security patches)
PPS: 825140, 921157, 844023, 919035, 874026
MB/s: 422.47, 471.63, 432.14, 470.55, 447.50 – AVG MB/s = 450.01
RT (ms): 0.0192, 0.0171, 0.0187, 0.0172, 0.0181
CPU load (%): 60.04, 59.32, 59.68, 59.48, 58.17
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
PPS: 690623, 885057, 735112, 717807, 784910
MB/s: 353.60, 453.15, 376.38, 367.52, 401.87 – AVG MB/s = 381.92
RT (ms): 0.0229, 0.0178, 0.0215, 0.0220, 0.0201
CPU load (%): 55.42, 58.90, 65.13, 61.11, 62.20
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
PPS: 405112, 432572, 447930, 390631, 409398
MB/s: 207.42, 221.48, 229.34, 200.00, 209.61 – AVG MB/s = 212.83
RT (ms): 0.0390, 0.0365, 0.0353, 0.0405, 0.0387
CPU load (%): 64.71, 67.31, 67.34, 63.33, 64.43
In-Memory database performance (Win/Redis) - single VM on top of ESXi host
Verification type Design
Test type Performance
Tested area Database
Test name Database performance from VM to In-Memory DB (Redis)
Test description Verification of CPU performance impact on in-memory database
performance
Tasks
Step 1/ Install and Run Redis DB on WIN-VM01
Step 2/ Run redis-benchmark
redis-benchmark -t get,set -n 1000000 -c 8
Step 3/ Note DB performance (transactions per second) and CPU load
(%)
Test combinations
• ESXi host without security patches – OS without security
patches (VM01 on ESX01)
• ESXi host without security patches – OS with security patches
(VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS without security patches (VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS with security patches (VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation security patches – OS with security patches
(VM11 on ESX03).
Compare results and quantify impact.
Expected results Lower memory performance on systems with security patches.
Test tools: RedisDB
Test result: passed
Test notes:
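Step 3 requires noting the transactions per second; a sketch of pulling that figure out of redis-benchmark's output. The sample text below is hypothetical but shaped like the per-test summary that Redis 3.x prints:

```python
import re

# Extract requests-per-second figures from redis-benchmark output (Step 3).
# Assumption about the output shape: a "====== SET ======" style header per
# test, followed by a "<value> requests per second" summary line.

def parse_redis_benchmark(output: str) -> dict:
    results = {}
    current = None
    for line in output.splitlines():
        header = re.match(r"====== (\w+) ======", line)
        if header:
            current = header.group(1)
        rps = re.match(r"([\d.]+) requests per second", line)
        if rps and current:
            results[current] = float(rps.group(1))
    return results

sample = """====== SET ======
  1000000 requests completed in 7.04 seconds
142081.00 requests per second
====== GET ======
  1000000 requests completed in 6.87 seconds
145618.00 requests per second"""
print(parse_redis_benchmark(sample))  # {'SET': 142081.0, 'GET': 145618.0}
```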
Test results

OS without security patches – WIN-VM01
• ESX01 (ESXi host without security patches)
TPS set,get: 139024/148676; 139801/148389; 143781/150262; 141262/150897; 139140/150150 – AVG: 140067, 149696
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
TPS set,get: 142247/149566; 136967/146929; 140627/146950; 142389/148500; 140666/149009 – AVG: 141180, 148153
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
TPS set,get: 142450/145560; 144009/145751; 145095/148456; 142511/147907; 144843/148831 – AVG: 143787, 147371

OS with security patches – WIN-VM11
• ESX01 (ESXi host without security patches)
TPS set,get: 140528/146049; 142045/146113; 141362/144885; 142959/142795; 142836/145921 – AVG: 142081, 145618
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
TPS set,get: 139024/144237; 140449/147232; 138159/144927; 140193/145836; 140114/147015 – AVG: 139777, 145926
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
TPS set,get: 83326/83430; 82399/83977; 80457/83885; 82399/84495; 82795/86140 – AVG: 82531, 84119
Linux OS
Network performance (Linux/NUTTCP) between two VMs within the same ESXi host
Verification type Design
Test type Performance
Tested area Network
Test name Network performance (Linux/NUTTCP) between two VMs within
the same ESXi host
Test description Verification of CPU performance impact on network performance
Tasks
Step 1/ Run
nuttcp -r -S -P 5501
nuttcp -r -S -P 5502
nuttcp -r -S -P 5503
nuttcp -r -S -P 5504
nuttcp -r -S -P 5505
nuttcp -r -S -P 5506
nuttcp -r -S -P 5507
nuttcp -r -S -P 5508
on LIN-VM01.
Step 2/ Run
iftop -F 192.168.4.32/32
on LIN-VM01 to monitor traffic
Step 3/ Change the IP address below to VM01's address and run the script (/tmp/run.sh)
#!/bin/bash
PORT_START=5501
LOGDIR="/tmp"
IP="192.168.4.31"
for i in `seq 1 8`;
do
echo "Process $i"
port=$(expr $PORT_START + $i - 1)
echo " port $port"
logfile="$LOGDIR/job$i.log"
echo " logfile $logfile"
echo " target IP address $IP"
( /usr/bin/nuttcp -t -b -P $port -T 30 $IP > $logfile ) &
sleep 0.1
done
on LIN-VM02 to generate workload.
Step 4/ Note network throughput (Mbps) of each process and calculate
sum.
SHOW RESULTS: cat /tmp/job*
SUM: cat /tmp/job* | cut -c29-38 | paste -s -d+ | bc
Test combinations
• ESXi host without security patches – OS without security
patches (LIN-VM01 and LIN-VM02 on ESX01)
• ESXi host without security patches – OS with security patches
(LIN-VM11 and LIN-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS without security patches (LIN-VM01 and LIN-
VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS with security patches (LIN-VM11 and LIN-VM12
on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation security patches – OS with security patches
(LIN-VM11 and LIN-VM12 on ESX03).
Compare results and quantify impact.
Expected results Lower network performance on systems with security patches.
Test tools: nuttcp
Test result: passed
Test notes:
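Step 4's SUM pipeline depends on the throughput value sitting at fixed character columns 29-38 of the nuttcp output. A column-independent sketch that keys on the "= <value> Mbps" field instead; the log lines below are hypothetical, shaped like nuttcp transmitter summaries:

```python
import re

# Sum per-stream throughput from the nuttcp job logs written by /tmp/run.sh (Step 4).
# Equivalent to:  cat /tmp/job* | cut -c29-38 | paste -s -d+ | bc
# but parsing the "... = <value> Mbps" field instead of fixed character columns.

def total_mbps(log_texts) -> float:
    total = 0.0
    for text in log_texts:
        match = re.search(r"=\s*([\d.]+)\s*Mbps", text)
        if match:
            total += float(match.group(1))
    return total

# Hypothetical per-stream results, shaped like nuttcp transmitter summary lines:
logs = [
    "  4530.1234 MB /  30.00 sec = 1266.7890 Mbps 0 %TX 12 %RX",
    "  4419.8765 MB /  30.00 sec = 1235.8765 Mbps 0 %TX 11 %RX",
]
print(round(total_mbps(logs), 4))  # 2502.6655
```

In practice this would be fed the contents of /tmp/job1.log … /tmp/job8.log from the script above.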
Test results

OS without security patches – LIN-VM01 and LIN-VM02
• ESX01 (ESXi host without security patches)
Mbps: 10497.4184, 10625.7293, 10290.1794, 10048.5660, 9479.2741 – AVG: 10278.7213
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
Mbps: 10421.9657, 10157.4198, 10673.1855, 10052.0098, 10610.0493 – AVG: 10396.4783
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
Mbps: 9338.4298, 9395.8587, 10205.4507, 9680.9801, 8942.6938 – AVG: 9471.7562

OS with security patches – LIN-VM11 and LIN-VM12
• ESX01 (ESXi host without security patches)
Mbps: 9592.9527, 9761.0624, 10655.8847, 10283.9328, 9630.8711 – AVG: 9891.9554
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
Mbps: 10626.2253, 10390.4692, 9941.6684, 10011.0204, 10373.6655 – AVG: 10258.2286
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
Mbps: 7794.9779, 8703.3505, 8383.7530, 7298.3641, 7165.8537 – AVG: 7825.6983
In-Memory database performance (Linux/Redis) - single VM on top of ESXi host
Verification type Design
Test type Performance
Tested area Database
Test name Database performance from VM to In-Memory DB (Redis)
Test description Verification of CPU performance impact on in-memory database
performance
Tasks
Step 1/ Install and Run Redis DB on LIN-VM01 (a Linux or FreeBSD
OS is required)
Step 2/ Run redis-benchmark
redis-benchmark -t get,set -n 1000000 -c 8
Step 3/ Note DB performance (transactions per second) and CPU load
(%)
Test combinations
• ESXi host without security patches – OS without security
patches (VM01 on ESX01)
• ESXi host without security patches – OS with security patches
(VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS without security patches (VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security
patches – OS with security patches (VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation security patches – OS with security patches
(VM11 on ESX03).
Compare results and quantify impact.
Expected results Lower memory performance on systems with security patches.
Test tools: RedisDB
Test result: passed
Test notes:
Test results

OS without security patches – LIN-VM01
• ESX01 (ESXi host without security patches)
TPS set,get: 148323/148964; 156764/156274; 156519/152392; 157529/150105; 156617/147645 – AVG: 156633, 150487
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
TPS set,get: 152951/146498; 153162/150761; 154798/151492; 153562/150082; 161759/158152 – AVG: 153840, 149583
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
TPS set,get: 155014/154535; 159235/144948; 149947/153209; 155908/156274; 148765/155110 – AVG: 153623, 154284

OS with security patches – LIN-VM11
• ESX01 (ESXi host without security patches)
TPS set,get: 106860/105674; 108742/103896; 105697/117882; 104242/104482; 105842/104253 – AVG: 106133, 104803
• ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
TPS set,get: 104471/106224; 105808/109218; 106269/104931; 102333/105529; 116918/107723 – AVG: 105516, 106492
• ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
TPS set,get: 43903/43929; 43903/43975; 44062/43821; 43869/43635; 44062/43780 – AVG: 43956, 43843
Findings
CPU Performance on MS Windows
(ESX01 = ESXi host without security patches; ESX02 = ESXi host with Hypervisor-Specific Remediation security patches; ESX03 = ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)

MS Windows 2012 R2 without security patches
• ESX01: Single Thread 236.3, Multi Thread 624.3
• ESX02: Single Thread 233.7, Multi Thread 623.3
• ESX03: Single Thread 234.5, Multi Thread 624

MS Windows 2012 R2 with security patches
• ESX01: Single Thread 231.9, Multi Thread 620.4
• ESX02: Single Thread 233.9, Multi Thread 621.8
• ESX03: Single Thread 207.2, Multi Thread 605.5
Secured system performance impact
CPU Single Thread ~ -12%
<< This is probably because the ESXi hardware (Intel NUC) has just 2 CPU cores and
the ESXi VMkernel is probably using more CPU resources on CPU core 0. Such a
performance impact was not observed on enterprise server hardware, where the
impact on a single CPU thread was negligible.
CPU Multi Thread ~ -3%
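The ~ -12% and ~ -3% figures follow from comparing the fully patched combination (patched guest OS on ESX03) against the fully unpatched baseline (unpatched guest OS on ESX01). A minimal sketch of the arithmetic:

```python
# Relative performance impact as used throughout the Findings section:
# fully patched configuration (patched guest on ESX03) vs. fully
# unpatched baseline (unpatched guest on ESX01).

def impact_pct(baseline: float, patched: float) -> float:
    return (patched - baseline) / baseline * 100

print(round(impact_pct(236.3, 207.2)))  # -12  (CPU single thread)
print(round(impact_pct(624.3, 605.5)))  # -3   (CPU multi thread)
```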
Storage performance (Win/IOmeter) – I/O size 512B
(ESX01 = ESXi host without security patches; ESX02 = ESXi host with Hypervisor-Specific Remediation security patches; ESX03 = ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)

MS Windows 2012 R2 without security patches
• ESX01: 20.25 MB/s, 39550 IOPS
• ESX02: 19.91 MB/s, 38887 IOPS
• ESX03: 20.13 MB/s, 39316 IOPS

MS Windows 2012 R2 with security patches
• ESX01: 19.47 MB/s, 38027 IOPS
• ESX02: 17.77 MB/s, 34707 IOPS
• ESX03: 15.63 MB/s, 30527 IOPS
Secured system storage performance impact is ~ -23%
Storage performance (Win/IOmeter) – I/O size 64kB
(ESX01 = ESXi host without security patches; ESX02 = ESXi host with Hypervisor-Specific Remediation security patches; ESX03 = ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)

MS Windows 2012 R2 without security patches
• ESX01: 499.36 MB/s, 7619 IOPS
• ESX02: 498.8 MB/s, 7611 IOPS
• ESX03: 501.76 MB/s, 7656 IOPS

MS Windows 2012 R2 with security patches
• ESX01: 497.82 MB/s, 7596 IOPS
• ESX02: 499.15 MB/s, 7616 IOPS
• ESX03: 495.68 MB/s, 7563 IOPS
Secured system performance impact is ~ -1%, which is negligible. In other words, no
negative performance impact was observed for larger I/O sizes.
Network performance (Win/IOmeter) – I/O size 512B
(ESX01 = ESXi host without security patches; ESX02 = ESXi host with Hypervisor-Specific Remediation security patches; ESX03 = ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)

MS Windows 2012 R2 without security patches
• ESX01: 463.59 MB/s
• ESX02: 445.37 MB/s
• ESX03: 425.22 MB/s

MS Windows 2012 R2 with security patches
• ESX01: 450.01 MB/s
• ESX02: 381.92 MB/s
• ESX03: 212.83 MB/s
Secured system network performance impact is ~ -54%.
Note: an even bigger impact (~ -60%) was observed on enterprise server hardware.
In-Memory database performance (Win/Redis)
                           ESXi host without     ESXi host with        ESXi host with Hypervisor-
                           security patches      Hypervisor-Specific   Specific and Hypervisor-Assisted
                                                 Remediation patches   Guest Remediation patches
MS Windows 2012 R2         TPS set,get:          TPS set,get:          TPS set,get:
without security patches   140067, 149696        141180, 148153        143787, 147371
MS Windows 2012 R2         TPS set,get:          TPS set,get:          TPS set,get:
with security patches      142081, 145618        139777, 145926        82531, 84119
Secured system memory performance impact is ~ -42%.
Note: on enterprise server hardware, a similar impact (~ -40%) was observed for the
set transaction, but an even bigger impact (~ -50%) was observed for the get
transaction.
Network performance (Linux/NUTTCP)
                           ESXi host without     ESXi host with        ESXi host with Hypervisor-
                           security patches      Hypervisor-Specific   Specific and Hypervisor-Assisted
                                                 Remediation patches   Guest Remediation patches
CentOS 7
without security patches   10278.72 Mbps         10396.48 Mbps         9471.76 Mbps
CentOS 7
with security patches      9891.96 Mbps          10258.23 Mbps         7825.7 Mbps
Secured system network performance impact is ~ -24%.
Note: a smaller impact (~ -9%) was observed on enterprise server hardware.
In-Memory database performance (Linux/Redis)
                           ESXi host without     ESXi host with        ESXi host with Hypervisor-
                           security patches      Hypervisor-Specific   Specific and Hypervisor-Assisted
                                                 Remediation patches   Guest Remediation patches
CentOS 7                   TPS set,get:          TPS set,get:          TPS set,get:
without security patches   156633, 150487        153840, 149583        153623, 154284
CentOS 7                   TPS set,get:          TPS set,get:          TPS set,get:
with security patches      106133, 104803        105516, 106492        43956, 43843
Secured system memory performance impact is ~ -70%
Note: a similar impact was observed on enterprise server hardware.
Conclusion
Performance qualification is a very specific and difficult subject. The performance
impact varies across different hardware and software configurations. However, the
performed tests are described in detail in this document, so the reader can understand
all conditions of the tests and the observed results. Readers can also run the same
tests on their own specific hardware and software configurations.
The tests in this document are focused on CPU, memory, storage and network. It is
worth mentioning that these are synthetic tests, created to measure the impact on one
specific infrastructure component at a time. Real workloads are usually a mix of CPU,
memory, storage and network activity, so the real impact is a combination of the
extreme impacts measured by these synthetic tests.
The performance impact of VMware ESXi patches
We did not observe any performance penalty after the application of the ESXi patches
(Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
alone. The performance penalty on CPU, memory and storage was only observed after
security patches were applied to the guest operating systems and the CPU microcode.
The only exception is the network performance tests, where we observed up to an 8%
performance penalty after the application of the ESXi patches even while the guest OS
was still unpatched.
The performance impact of GuestOS and CPU Microcode patches
After application of all security remediations for Windows 2012 R2 and ESXi 6.5, we
observed the following performance impacts:
• CPU
o ~ 12% negative impact on single-thread CPU performance
o ~ 3% (negligible) negative impact on multi-thread CPU performance
• Memory
o ~ 42% negative impact on memory (in-memory database) performance
• Storage
o ~ 23% negative impact on storage performance with small I/O size (512B)
o No impact on storage performance with 64kB I/O size
• Network
o ~ 54% negative impact on network performance with small I/O size (512B)
After application of all security remediations for CentOS 7 and ESXi 6.5, we observed
the following performance impacts:
• Memory
o ~ 70% negative impact on memory (in-memory database) performance
• Network
o ~ 24% negative impact on network performance
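The headline percentages in the lists above can be recomputed from the raw measurements in the result tables, always comparing the fully patched guest on the fully patched host against the fully unpatched baseline. A small Python sketch (illustrative only; the benchmark labels are informal names, and the computed values round slightly differently than the in-text approximations):

```python
# (baseline, fully secured) pairs taken from the result tables above.
measurements = {
    "Win CPU single thread":  (236.3, 207.2),
    "Win CPU multi thread":   (624.3, 605.5),
    "Win storage 512B IOPS":  (39550, 30527),
    "Win network 512B MB/s":  (463.59, 212.83),
    "Win Redis set TPS":      (140067, 82531),
    "Linux network Mbps":     (10278.72, 7825.7),
    "Linux Redis set TPS":    (156633, 43956),
}

# Relative impact in percent, negative meaning a slowdown.
for name, (base, secured) in measurements.items():
    pct = (secured - base) / base * 100
    print(f"{name}: {pct:+.0f} %")
```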
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 

Spectre meltdown performance_tests - v0.3

• OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
• IP address: 192.168.5.36
• Software
  ▪ CPU-Z
  ▪ IOmeter
  ▪ nuttcp
  ▪ Redis 3.0.503

MS-VM02
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
  ▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
  ▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
• IP address: 192.168.5.37
• Software
  ▪ CPU-Z
  ▪ IOmeter
  ▪ nuttcp
  ▪ Redis 3.0.503

MS-VM11
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
  ▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
  ▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
• IP address: 192.168.5.46
• Software
  ▪ CPU-Z
  ▪ IOmeter
  ▪ nuttcp
  ▪ Redis 3.0.503

MS-VM12
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
  ▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
  ▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
• IP address: 192.168.5.47
• Software
  ▪ CPU-Z
  ▪ IOmeter
  ▪ nuttcp
  ▪ Redis 3.0.503

Linux

LIN-VM01
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
  ▪ 40 GB Disk (OS) – Thick, eager-zeroed
• OS – CentOS 7 – without Spectre/Meltdown updates
  ▪ Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.31
• Software
  ▪ Redis
  ▪ nuttcp
  ▪ iftop
  ▪ bc
LIN-VM02
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
  ▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
  ▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – CentOS 7 – without Spectre/Meltdown updates
  ▪ Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.32
• Software
  ▪ Redis
  ▪ nuttcp
  ▪ iftop
  ▪ bc

LIN-VM11
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
  ▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
  ▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – CentOS 7 – with Spectre/Meltdown updates
  ▪ Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.41
• Software
  ▪ Redis
  ▪ nuttcp
  ▪ iftop
  ▪ bc

LIN-VM12
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
  ▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
  ▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – CentOS 7 – with Spectre/Meltdown updates
  ▪ Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.42
• Software
  ▪ Redis
  ▪ nuttcp
  ▪ iftop
  ▪ bc
Performance Testing Tools

CPU-Z – https://www.cpuid.com/softwares/cpu-z.html
Download: https://www.cpuid.com/downloads/cpu-z/cpu-z_1.83-en.exe
CPU-Z is freeware that gathers information on some of the main devices of your system.

IOMETER – http://www.iometer.org/
Download: http://www.iometer.org/doc/downloads.html
IOmeter is an I/O subsystem measurement and characterization tool for single and clustered systems.

NUTTCP
Install:
RedHat 7: yum install --enablerepo=Unsupported_EPEL nuttcp
CentOS 7: yum install epel-release nuttcp
MS-Windows: http://nuttcp.net/nuttcp/latest/binaries/nuttcp-8.1.4.win64.zip
nuttcp is a client/server network performance measurement tool based on TTCP (Test TCP).

Usage
The server part is started by the following command:
nuttcp -S -N 100
The client part is started by the following command:
cat /dev/zero | nuttcp -t -s -N 100 czchoapint092

Other nuttcp examples:
Server and client:
nuttcp -r -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -s -N 20 -P 5000 czchoapint094
Larger buffers:
nuttcp -r -l 8972 -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -l 8972 -s -N 20 -P 5000 czchoapint094
UDP traffic:
nuttcp -r -u -l 8972 -w4m -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -u -l 8972 -w4m -s -N 20 -P 5000 czchoapint094

REDIS – https://redis.io/
Install:
CentOS 7: yum install redis
MS-Windows: https://dingyuliang.me/redis-3-2-install-redis-windows/
Download: https://github.com/MicrosoftArchive/redis/releases/download/win-3.2.100/Redis-x64-3.2.100.zip
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
Spectre/Meltdown OS Remediations

ESXi
Use VMware Update Manager and patches based on VMSA-2018-02 and VMSA-2018-04.

MS-Windows
To protect MS-Windows, apply the updates available here:
http://www.catalog.update.microsoft.com/Search.aspx?q=KB4056898
To enable the fix, change the following registry settings:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations /t REG_SZ /d "1.0" /f
Restart the server for the changes to take effect.

Linux / CentOS
Use "yum update" and apply the latest OS updates.

Spectre/Meltdown Remediation Checkers

ESXi
ESXi command to check whether the security microcode is updated:
if [ `vsish -e get /hardware/msr/pcpu/0/addr/0x00000048 > /dev/null 2>&1; echo $?` -eq 0 ]; then echo -e "\nIntel Security Microcode Updated\n"; else echo -e "\nIntel Security Microcode NOT Updated\n"; fi

MS-Windows
MS-Windows test tool for SPECTRE/MELTDOWN remediation
Installation
• Article: https://support.microsoft.com/en-us/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in
• PowerShell 5.0 is required
Install-Module SpeculationControl

Vulnerability check (PowerShell commands):
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
Get-SpeculationControlSettings

Linux / CentOS
Linux test tool for SPECTRE/MELTDOWN remediation
Installation
• Blog: https://www.cyberciti.biz/faq/check-linux-server-for-spectre-meltdown-vulnerability/
• Tool:
cd /root
wget -O spectre-meltdown-checker.sh https://raw.githubusercontent.com/speed47/spectre-meltdown-checker/master/spectre-meltdown-checker.sh
chmod 755 ./spectre-meltdown-checker.sh

Vulnerability check (shell command):
/root/spectre-meltdown-checker.sh
Spectre/Meltdown Remediation Status of VMs on ESXi Hosts

MS-Windows

MS-VM01 on ESX01
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM01 on ESX02
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

MS-VM01 on ESX03
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09

MS-VM02 on ESX01
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27

MS-VM02 on ESX02
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

MS-VM02 on ESX03
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09

MS-VM11 on ESX01
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27

MS-VM11 on ESX02
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

MS-VM11 on ESX03
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09

MS-VM12 on ESX01
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27

MS-VM12 on ESX02
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

MS-VM12 on ESX03
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
Linux / CentOS

LIN-VM01 on ESX01
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27

LIN-VM01 on ESX02
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

LIN-VM01 on ESX03
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09

LIN-VM02 on ESX01
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27

LIN-VM02 on ESX02
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

LIN-VM02 on ESX03
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09

LIN-VM11 on ESX01
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27

LIN-VM11 on ESX02
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

LIN-VM11 on ESX03
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09

LIN-VM12 on ESX01
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27

LIN-VM12 on ESX02
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19

LIN-VM12 on ESX03
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
Performance Tests

MS Windows OS

CPU performance (Win/CPU-Z) – single VM on top of ESXi host

Verification type: Design
Test type: Performance
Tested area: CPU
Test name: CPU performance (Win/CPU-Z) of VM on top of ESXi host
Test description: Verification of Spectre/Meltdown security patches impact on CPU performance
Tasks:
Step 1/ Generate CPU workload leveraging the CPU-Z benchmarking tool. Run CPU-Z on MS-VM.
Step 2/ Note CPU performance (CPU-Z benchmark single thread and multi thread).
Test combinations:
• ESXi host without security patches – OS without security patches (MS-VM01 on ESX01)
• ESXi host without security patches – OS with security patches (MS-VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (MS-VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (MS-VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (MS-VM11 on ESX03)
Compare results and quantify impact.
Expected results: Lower CPU performance on systems with security patches.
Test tools: CPU-Z
Test result: passed
Test notes:
Test results

OS without security patches (MS-VM01):
• on ESX01 (ESXi host without security patches)
  ▪ Single Thread: 233.9, 235.4, 237.1, 236.7, 236.7 (AVG = 236.3)
  ▪ Multi Thread: 624.3, 624.5, 624.2, 621.8, 624.9 (AVG = 624.3)
• on ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
  ▪ Single Thread: 236.0, 233.8, 233.3, 233.4, 233.9 (AVG = 233.7)
  ▪ Multi Thread: 616.6, 623.4, 622.4, 624.6, 624.1 (AVG = 623.3)
• on ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
  ▪ Single Thread: 233.8, 253.6, 233.0, 234.6, 235.1 (AVG = 234.5)
  ▪ Multi Thread: 623.7, 623.3, 624.5, 625.0, 623.8 (AVG = 624.0)

OS with security patches (MS-VM11):
• on ESX01 (ESXi host without security patches)
  ▪ Single Thread: 234.5, 235.3, 232.0, 225.8, 236.5 (AVG = 231.9)
  ▪ Multi Thread: 622.2, 619.1, 621.0, 612.8, 621.2 (AVG = 620.4)
• on ESX02 (ESXi host with Hypervisor-Specific Remediation security patches)
  ▪ Single Thread: 232.1, 234.3, 233.4, 234.7, 234.1 (AVG = 233.9)
  ▪ Multi Thread: 622.2, 623.6, 621.0, 620.9, 622.3 (AVG = 621.8)
• on ESX03 (ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches)
  ▪ Single Thread: 208.1, 207.4, 202.9, 206.1, 209.7 (AVG = 207.2)
  ▪ Multi Thread: 604.7, 597.5, 602.3, 609.5, 610.6 (AVG = 605.5)
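Each test asks the reader to "compare results and quantify impact". A minimal sketch of that calculation in Python (the helper name is ours, not part of the original test plan), using two single-thread averages from the table above:

```python
def relative_change_pct(baseline: float, measured: float) -> float:
    """Percentage change of measured vs. baseline; negative means a slowdown."""
    return (measured - baseline) / baseline * 100.0

# Single-thread CPU-Z averages from the table above:
# unpatched stack (MS-VM01 on ESX01) vs. fully patched stack (MS-VM11 on ESX03).
print(round(relative_change_pct(236.3, 207.2), 1))  # -12.3
```

The same helper applies to any pair of averages in the result tables (MB/s, PPS, TPS) to express the patch impact as a percentage.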
Storage performance (Win/IOmeter) – single VM storage performance to local disk

Verification type: Design
Test type: Performance
Tested area: Storage
Test name: Storage performance (Win/IOmeter) – single VM storage performance to local disk
Test description: Verification of CPU performance impact to storage performance
Tasks:
Step 1/ Run the IOmeter GUI on MS-VM01.
Step 2/ Run the disk IO testing tools (VM01 with IOmeter GUI and dynamo) and generate load to disk on shared storage.
I/O workload patterns for tests:
• 512B, 100% Random, 50% Write
• 64kB, 100% Random, 50% Write
Multi-threading configuration:
• 4 Workers / 1 Outstanding IO
Disk size: 10 GB (20000000 sectors)
Step 3/ Note storage performance (I/Os per second = IOPS), data throughput (MB/s), response time (ms) and CPU load.
Test combinations:
• ESXi host without security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX01)
• ESXi host without security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX03)
Compare results and quantify impact.
Expected results: Lower storage performance on systems with security patches.
Test tools: IOmeter
Test result: passed
Test notes:
Test results – I/O size 512B

OS without security patches (MS-VM01):
• on ESX01: IOPS 39424, 39391, 39652, 39582, 39839; MB/s 20.19, 20.17, 20.30, 20.27, 20.40; RT (ms) 0.1007, 0.1008, 0.1001, 0.1003, 0.0996; CPU load (%) 32.32, 33.20, 34.71, 34.90, 33.60; AVG MB/s = 20.25
• on ESX02: IOPS 38829, 39023, 38600, 38847, 39521; MB/s 19.88, 19.98, 19.76, 19.89, 20.24; RT (ms) 0.1021, 0.1016, 0.1027, 0.1021, 0.1003; CPU load (%) 32.79, 32.91, 35.35, 35.49, 34.49; AVG MB/s = 19.91
• on ESX03: IOPS 39596, 39459, 38660, 38953, 39775; MB/s 20.27, 20.20, 19.79, 19.94, 20.36; RT (ms) 0.1003, 0.1006, 0.1027, 0.1019, 0.0998; CPU load (%) 32.12, 33.13, 35.16, 35.17, 34.53; AVG MB/s = 20.13

OS with security patches (MS-VM11):
• on ESX01: IOPS 37541, 38074, 37899, 38188, 38096; MB/s 19.22, 19.49, 19.40, 19.55, 19.51; RT (ms) 0.1057, 0.1042, 0.1047, 0.1039, 0.1042; CPU load (%) 35.21, 36.73, 36.39, 36.10, 37.36; AVG MB/s = 19.47
• on ESX02: IOPS 34076, 37007, 37832, 32789, 33052; MB/s 17.45, 18.95, 19.37, 16.79, 16.92; RT (ms) 0.1164, 0.1072, 0.1048, 0.1210, 0.1200; CPU load (%) 60.28, 44.48, 37.29, 61.28, 61.60; AVG MB/s = 17.77
• on ESX03: IOPS 30667, 30722, 30839, 30179, 29600; MB/s 15.70, 15.73, 15.79, 15.45, 15.16; RT (ms) 0.1294, 0.1292, 0.1287, 0.1316, 0.1341; CPU load (%) 40.92, 41.93, 40.71, 40.27, 41.74; AVG MB/s = 15.63

Test results – I/O size 64kB

OS without security patches (MS-VM01):
• on ESX01: IOPS 7638, 7566, 7619, 7600, 7704; MB/s 500.62, 495.87, 499.34, 498.12, 504.90; RT (ms) 0.5228, 0.5278, 0.5241, 0.5254, 0.5184; CPU load (%) 7.69, 7.74, 7.99, 7.67, 7.42; AVG MB/s = 499.36
• on ESX02: IOPS 7593, 7533, 7619, 7655, 7620; MB/s 497.65, 493.70, 499.36, 501.74, 499.39; RT (ms) 0.5257, 0.5299, 0.5239, 0.5214, 0.5239; CPU load (%) 8.26, 7.70, 8.12, 8.40, 8.01; AVG MB/s = 498.80
• on ESX03: IOPS 7651, 7672, 7644, 7672, 7455; MB/s 501.48, 502.84, 501.00, 502.80, 488.61; RT (ms) 0.5220, 0.5205, 0.5224, 0.5205, 0.5357; CPU load (%) 8.31, 7.71, 8.02, 8.45, 8.39; AVG MB/s = 501.76

OS with security patches (MS-VM11):
• on ESX01: IOPS 7607, 7651, 7612, 7568, 7603; MB/s 498.59, 501.44, 498.90, 495.98, 498.31; RT (ms) 0.5248, 0.5219, 0.5245, 0.5276, 0.5251; CPU load (%) 9.25, 9.55, 9.19, 9.29, 9.55; AVG MB/s = 497.82
• on ESX02: IOPS 7599, 7662, 7593, 7596, 7657; MB/s 497.76, 502.20, 497.62, 497.86, 501.83; RT (ms) 0.5257, 0.5211, 0.5257, 0.5255, 0.5213; CPU load (%) 9.40, 9.94, 8.88, 8.71, 9.49; AVG MB/s = 499.15
• on ESX03: IOPS 7541, 7575, 7582, 7558, 7556; MB/s 494.25, 496.45, 496.92, 495.36, 495.24; RT (ms) 0.5294, 0.5270, 0.5265, 0.5281, 0.5282; CPU load (%) 12.98, 12.18, 12.93, 13.09, 12.65; AVG MB/s = 495.68
Network performance (Win/IOmeter) between two VMs within the same ESXi host

Verification type: Design
Test type: Performance
Tested area: Network
Test name: Network performance (Win/IOmeter) between two VMs within the same ESXi host
Test description: Verification of CPU performance impact to network performance
Tasks:
Step 1/ Run the IOmeter GUI on MS-VM01.
Step 2/ Remove all storage workers.
Step 3/ Run IOmeter dynamo on MS-VM02 connected to the IOmeter host <hostname of VM01>:
dynamo.exe -i MS-VM01 -m MS-VM02
Step 4/ Create 8 network workers. Assign the specification I/O Size 512B, 100% Read to all network workers. Set the test duration to 30 seconds.
Step 5/ Generate network workload between the two MS-VMs on the same ESXi host.
Step 6/ Note network performance (packets per second), throughput (MB/s), response time (ms) and CPU load (%).
Test combinations:
• ESXi host without security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX01)
• ESXi host without security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX03)
Compare results and quantify impact.
Expected results: Lower network performance on systems with security patches.
Test tools: IOmeter
Test result: passed
Test notes:
Test results

OS without security patches (MS-VM01 and MS-VM02):
• on ESX01: PPS 828726, 925343, 944935, 935526, 855468; MB/s 424.31, 473.78, 483.81, 478.99, 438.00; RT (ms) 0.0191, 0.0170, 0.0167, 0.0168, 0.0184; CPU load (%) 56.76, 60.22, 59.97, 60.31, 58.95; AVG MB/s = 463.59
• on ESX02: PPS 748945, 896333, 876467, 836831, 927155; MB/s 383.46, 458.92, 448.75, 428.46, 474.70; RT (ms) 0.0211, 0.0176, 0.0180, 0.0189, 0.0170; CPU load (%) 57.18, 58.88, 59.98, 60.53, 58.93; AVG MB/s = 445.37
• on ESX03: PPS 708065, 843054, 846919, 803661, 844828; MB/s 362.53, 431.64, 433.62, 411.47, 432.55; RT (ms) 0.0223, 0.0187, 0.0186, 0.0196, 0.0187; CPU load (%) 56.95, 58.67, 58.22, 60.49, 57.84; AVG MB/s = 425.22

OS with security patches (MS-VM11 and MS-VM12):
• on ESX01: PPS 825140, 921157, 844023, 919035, 874026; MB/s 422.47, 471.63, 432.14, 470.55, 447.50; RT (ms) 0.0192, 0.0171, 0.0187, 0.0172, 0.0181; CPU load (%) 60.04, 59.32, 59.68, 59.48, 58.17; AVG MB/s = 450.01
• on ESX02: PPS 690623, 885057, 735112, 717807, 784910; MB/s 353.60, 453.15, 376.38, 367.52, 401.87; RT (ms) 0.0229, 0.0178, 0.0215, 0.0220, 0.0201; CPU load (%) 55.42, 58.90, 65.13, 61.11, 62.20; AVG MB/s = 381.92
• on ESX03: PPS 405112, 432572, 447930, 390631, 409398; MB/s 207.42, 221.48, 229.34, 200.00, 209.61; RT (ms) 0.0390, 0.0365, 0.0353, 0.0405, 0.0387; CPU load (%) 64.71, 67.31, 67.34, 63.33, 64.43; AVG MB/s = 212.83
In-Memory database performance (Win/Redis) – single VM on top of ESXi host

Verification type: Design
Test type: Performance
Tested area: Database
Test name: Database performance from VM to In-Memory DB (Redis)
Test description: Verification of CPU performance impact to in-memory database performance
Tasks:
Step 1/ Install and run Redis DB on WIN-VM01.
Step 2/ Run redis-benchmark:
redis-benchmark -t get,set -n 1000000 -c 8
Step 3/ Note DB performance (transactions per second) and CPU load (%).
Test combinations:
• ESXi host without security patches – OS without security patches (VM01 on ESX01)
• ESXi host without security patches – OS with security patches (VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (VM11 on ESX03)
Compare results and quantify impact.
Expected results: Lower memory performance on systems with security patches.
Test tools: RedisDB
Test result: passed
Test notes:
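Step 3 records transactions per second from the benchmark output. redis-benchmark can also emit machine-readable output with its --csv option; a small parsing sketch (assuming the usual one-record-per-line format such as "SET","139024.00") makes it easy to collect the TPS numbers across runs:

```python
import csv
import io

def parse_redis_benchmark_csv(output: str) -> dict:
    """Map test name -> requests per second from `redis-benchmark --csv` output."""
    reader = csv.reader(io.StringIO(output))
    return {row[0]: float(row[1]) for row in reader if len(row) == 2}

# Example lines in the --csv format (values taken from the SET/GET results below):
sample = '"SET","139024.00"\n"GET","148676.00"\n'
print(parse_redis_benchmark_csv(sample))  # {'SET': 139024.0, 'GET': 148676.0}
```

This is only a convenience for tabulating results; the test procedure itself is unchanged.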
Test results

OS without security patches (WIN-VM01):
• on ESX01: TPS set/get: 139024/148676, 139801/148389, 143781/150262, 141262/150897, 139140/150150; AVG = 140067/149696
• on ESX02: TPS set/get: 142247/149566, 136967/146929, 140627/146950, 142389/148500, 140666/149009; AVG = 141180/148153
• on ESX03: TPS set/get: 142450/145560, 144009/145751, 145095/148456, 142511/147907, 144843/148831; AVG = 143787/147371

OS with security patches (WIN-VM11):
• on ESX01: TPS set/get: 140528/146049, 142045/146113, 141362/144885, 142959/142795, 142836/145921; AVG = 142081/145618
• on ESX02: TPS set/get: 139024/144237, 140449/147232, 138159/144927, 140193/145836, 140114/147015; AVG = 139777/145926
• on ESX03: TPS set/get: 83326/83430, 82399/83977, 80457/83885, 82399/84495, 82795/86140; AVG = 82531/84119
Linux OS

Network performance (Linux/NUTTCP) between two VMs within the same ESXi host

Verification type: Design
Test type: Performance
Tested area: Network
Test name: Network performance (Linux/NUTTCP) between two VMs within the same ESXi host
Test description: Verification of CPU performance impact to network performance
Tasks:
Step 1/ Run the following nuttcp servers on LIN-VM01:
nuttcp -r -S -P 5501
nuttcp -r -S -P 5502
nuttcp -r -S -P 5503
nuttcp -r -S -P 5504
nuttcp -r -S -P 5505
nuttcp -r -S -P 5506
nuttcp -r -S -P 5507
nuttcp -r -S -P 5508
Step 2/ Run iftop -F 192.168.4.32/32 on LIN-VM01 to monitor traffic.
Step 3/ Change the IP address below to VM01 and run the script (/tmp/run.sh) on LIN-VM02 to generate workload:

#!/bin/bash
PORT_START=5501
LOGDIR="/tmp"
IP="192.168.4.31"
for i in `seq 1 8`; do
  echo "Process $i"
  port=$(expr $PORT_START + $i - 1)
  echo " port $port"
  logfile="$LOGDIR/job$i.log"
  echo " logfile $logfile"
  echo " target IP address $IP"
  ( /usr/bin/nuttcp -t -b -P $port -T 30 $IP > $logfile ) &
  sleep 0.1
done

Step 4/ Note the network throughput (Mbps) of each process and calculate the sum.
Show results: cat /tmp/job*
Sum: cat /tmp/job* | cut -c29-38 | paste -s -d+ | bc
  • 48. Test combinations
• ESXi host without security patches - OS without security patches (LIN-VM01 and LIN-VM02 on ESX01)
• ESXi host without security patches - OS with security patches (LIN-VM11 and LIN-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches - OS without security patches (LIN-VM01 and LIN-VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches - OS with security patches (LIN-VM11 and LIN-VM12 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches - OS without security patches (LIN-VM01 and LIN-VM02 on ESX03)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches - OS with security patches (LIN-VM11 and LIN-VM12 on ESX03)
Compare results and quantify impact.
Expected results: Lower network performance on systems with security patches.
Test tools: nuttcp, iftop
Test result: passed
Test notes:
  • 49. Test results (Mbps; five runs and average per combination)
OS without security patches (LIN-VM01, LIN-VM02)
• ESX01 (no security patches) - Mbps: 10497.4184; 10625.7293; 10290.1794; 10048.5660; 9479.2741 - AVG: 10278.7213
• ESX02 (Hypervisor-Specific Remediation) - Mbps: 10421.9657; 10157.4198; 10673.1855; 10052.0098; 10610.0493 - AVG: 10396.4783
• ESX03 (Hypervisor-Specific and Hypervisor-Assisted Guest Remediation) - Mbps: 9338.4298; 9395.8587; 10205.4507; 9680.9801; 8942.6938 - AVG: 9471.7562
OS with security patches (LIN-VM11, LIN-VM12)
• ESX01 - Mbps: 9592.9527; 9761.0624; 10655.8847; 10283.9328; 9630.8711 - AVG: 9891.9554
• ESX02 - Mbps: 10626.2253; 10390.4692; 9941.6684; 10011.0204; 10373.6655 - AVG: 10258.2286
• ESX03 - Mbps: 7794.9779; 8703.3505; 8383.7530; 7298.3641; 7165.8537 - AVG: 7825.6983
  • 50. In-Memory database performance (Linux/Redis) - single VM on top of ESXi host
Verification type: Design
Test type: Performance
Tested area: Database
Test name: Database performance from VM to In-Memory DB (Redis)
Test description: Verification of CPU performance impact on in-memory database performance
Tasks:
Step 1/ Install and run Redis DB on LIN-VM01 (a Linux or FreeBSD OS is required).
Step 2/ Run redis-benchmark:
redis-benchmark -t get,set -n 1000000 -c 8
Step 3/ Note DB performance (transactions per second) and CPU load (%).
Test combinations:
• ESXi host without security patches - OS without security patches (VM01 on ESX01)
• ESXi host without security patches - OS with security patches (VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches - OS without security patches (VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches - OS with security patches (VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches - OS without security patches (VM01 on ESX03)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches - OS with security patches (VM11 on ESX03)
Compare results and quantify impact.
Expected results: Lower in-memory database performance on systems with security patches.
Test tools: Redis (redis-benchmark)
Test result: passed
Test notes:
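Step 3 can be scripted instead of read off the interactive summary: redis-benchmark also offers a --csv output mode that emits one quoted line per command, such as "SET","148323.25". A minimal parsing sketch; the here-document stands in for a live benchmark run:

```shell
#!/bin/sh
# Parse SET/GET requests-per-second from redis-benchmark's CSV output.
# Assumes the --csv flag; with a live Redis server, replace the
# here-document with:
#   redis-benchmark -t get,set -n 1000000 -c 8 --csv
parse_tps() {
    tr -d '"' | awk -F',' '$1 == "SET" || $1 == "GET" { print $1, $2 }'
}
parse_tps <<'EOF'
"SET","148323.25"
"GET","148964.00"
EOF
```

For the sample input this prints one line per command ("SET 148323.25" and "GET 148964.00"), which is easy to collect across the five runs of each test combination.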
  • 51. Test results (TPS set,get; five runs and average per combination)
OS without security patches (LIN-VM01)
• ESX01 (no security patches) - TPS set,get: 148323,148964; 156764,156274; 156519,152392; 157529,150105; 156617,147645 - AVG: 156633, 150487
• ESX02 (Hypervisor-Specific Remediation) - TPS set,get: 152951,146498; 153162,150761; 154798,151492; 153562,150082; 161759,158152 - AVG: 153840, 149583
• ESX03 (Hypervisor-Specific and Hypervisor-Assisted Guest Remediation) - TPS set,get: 155014,154535; 159235,144948; 149947,153209; 155908,156274; 148765,155110 - AVG: 153623, 154284
OS with security patches (LIN-VM11)
• ESX01 - TPS set,get: 106860,105674; 108742,103896; 105697,117882; 104242,104482; 105842,104253 - AVG: 106133, 104803
• ESX02 - TPS set,get: 104471,106224; 105808,109218; 106269,104931; 102333,105529; 116918,107723 - AVG: 105516, 106492
• ESX03 - TPS set,get: 43903,43929; 43903,43975; 44062,43821; 43869,43635; 44062,43780 - AVG: 43956, 43843
  • 52. Findings
CPU performance on MS Windows
MS Windows 2012 R2 without security patches:
• ESX01 (no security patches) - Single Thread: 236.3, Multi Thread: 624.3
• ESX02 (Hypervisor-Specific Remediation) - Single Thread: 233.7, Multi Thread: 623.3
• ESX03 (Hypervisor-Specific and Hypervisor-Assisted Guest Remediation) - Single Thread: 234.5, Multi Thread: 624
MS Windows 2012 R2 with security patches:
• ESX01 - Single Thread: 231.9, Multi Thread: 620.4
• ESX02 - Single Thread: 233.9, Multi Thread: 621.8
• ESX03 - Single Thread: 207.2, Multi Thread: 605.5
Secured system performance impact:
• CPU Single Thread ~ -12% << this is probably because the ESXi hardware has just 2 CPU cores (Intel NUC) and the ESXi VMkernel is probably consuming more CPU resources on core 0. Such a performance impact was not observed on enterprise server hardware, where the single-thread impact was negligible.
• CPU Multi Thread ~ -3%
Storage performance (Win/IOmeter) - I/O size 512B
MS Windows 2012 R2 without security patches:
• ESX01: 20.25 MB/s, 39550 IOPS | ESX02: 19.91 MB/s, 38887 IOPS | ESX03: 20.13 MB/s, 39316 IOPS
MS Windows 2012 R2 with security patches:
• ESX01: 19.47 MB/s, 38027 IOPS | ESX02: 17.77 MB/s, 34707 IOPS | ESX03: 15.63 MB/s, 30527 IOPS
Secured system storage performance impact is ~ -23%.
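The impact percentages quoted in these findings are the relative drop of the fully patched stack (patched OS on ESX03) against the fully unpatched baseline (unpatched OS on ESX01). A small helper reproduces the calculation from the numbers in the tables above:

```shell
#!/bin/sh
# Relative performance drop of the patched stack versus the unpatched
# baseline: (baseline - patched) / baseline * 100, i.e. how the
# "~ -12%" style figures in this section are derived.
impact() {
    awk -v b="$1" -v p="$2" 'BEGIN { printf "%.1f\n", (b - p) / b * 100 }'
}
impact 236.3 207.2   # CPU single thread -> 12.3
impact 624.3 605.5   # CPU multi thread  -> 3.0
impact 39550 30527   # 512B storage IOPS -> 22.8
```

The same formula applied to the network, Redis and nuttcp averages yields the remaining headline figures.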
  • 53. Storage performance (Win/IOmeter) - I/O size 64kB
MS Windows 2012 R2 without security patches:
• ESX01 (no security patches): 499.36 MB/s, 7619 IOPS
• ESX02 (Hypervisor-Specific Remediation): 498.8 MB/s, 7611 IOPS
• ESX03 (Hypervisor-Specific and Hypervisor-Assisted Guest Remediation): 501.76 MB/s, 7656 IOPS
MS Windows 2012 R2 with security patches:
• ESX01: 497.82 MB/s, 7596 IOPS
• ESX02: 499.15 MB/s, 7616 IOPS
• ESX03: 495.68 MB/s, 7563 IOPS
Secured system performance impact is ~ -1%, which is negligible. In other words, no negative performance impact was observed for the larger I/O size.
Network performance (Win/IOmeter) - I/O size 512B
MS Windows 2012 R2 without security patches:
• ESX01: 463.59 MB/s | ESX02: 445.37 MB/s | ESX03: 425.22 MB/s
MS Windows 2012 R2 with security patches:
• ESX01: 450.01 MB/s | ESX02: 381.92 MB/s | ESX03: 212.83 MB/s
Secured system network performance impact is ~ -54% << an even bigger impact (~ -60%) was observed on enterprise server hardware.
  • 54. In-Memory database performance (Win/Redis)
MS Windows 2012 R2 without security patches - AVG TPS set,get:
• ESX01 (no security patches): 140067, 149696
• ESX02 (Hypervisor-Specific Remediation): 141180, 148153
• ESX03 (Hypervisor-Specific and Hypervisor-Assisted Guest Remediation): 143787, 147371
MS Windows 2012 R2 with security patches - AVG TPS set,get:
• ESX01: 142081, 145618 | ESX02: 139777, 145926 | ESX03: 82531, 84119
Secured system memory performance impact is ~ -42% << a similar impact (~ -40%) was observed for the set transaction, and an even bigger impact (~ -50%) for the get transaction, on enterprise server hardware.
Network performance (Linux/NUTTCP)
CentOS 7 without security patches:
• ESX01: 10278.72 Mbps | ESX02: 10396.48 Mbps | ESX03: 9471.76 Mbps
CentOS 7 with security patches:
• ESX01: 9891.96 Mbps | ESX02: 10258.23 Mbps | ESX03: 7825.7 Mbps
Secured system network performance impact is ~ -24% << a smaller impact (~ -9%) was observed on enterprise server hardware.
In-Memory database performance (Linux/Redis)
CentOS 7 without security patches - AVG TPS set,get:
• ESX01: 156633, 150487 | ESX02: 153840, 149583 | ESX03: 153623, 154284
CentOS 7 with security patches - AVG TPS set,get:
• ESX01: 106133, 104803 | ESX02: 105516, 106492 | ESX03: 43956, 43843
Secured system memory performance impact is ~ -70% << a similar impact was observed on enterprise server hardware.
  • 55. Conclusion
Quantifying performance impact is a very specific and difficult subject, and the impact varies across different hardware and software configurations. However, the performed tests are described in detail in this document so the reader can understand all conditions of the tests and the observed results, and can repeat the tests on his or her own hardware and software configuration.
The tests in this document are focused on CPU, memory, storage and network. It is worth mentioning that these are synthetic tests, created to measure the impact on one specific infrastructure component at a time. Real workloads are usually a mix of CPU, memory, storage and network, so their impact is a combination of the extremes measured by these synthetic tests.
The performance impact of VMware ESXi patches
We did not observe a performance penalty after application of the ESXi patches alone (Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches). The performance penalty on CPU, memory and storage was observed only after application of the security patches into the guest operating systems and the CPU microcode. The only exception is the network performance tests, where we observed up to an 8% performance penalty after application of the ESXi patches even when the Guest OS was still unpatched.
The performance impact of Guest OS and CPU microcode patches
After application of all security remediations for Windows 2012 R2 and ESXi 6.5, we observed the following performance impacts:
• CPU
o ~ 12% negative performance impact on single-thread CPU performance
o ~ 3% (negligible) negative performance impact on multi-thread CPU performance
• Memory
o ~ 42% negative performance impact on in-memory database performance
• Storage
o ~ 23% negative performance impact on storage performance with a small I/O size (512B)
o No performance impact on storage performance with a 64kB I/O size
• Network
o ~ 54% negative performance impact on network performance with a small I/O size (512B)
After application of all security remediations for CentOS 7 and ESXi 6.5, we observed the following performance impacts:
• Memory
o ~ 70% negative performance impact on in-memory database performance
• Network
o ~ 24% negative performance impact on network performance