Cormac Hogan
Cody Hosterman
SER1143BU
#VMworld #SER1143BU
A Deep Dive into vSphere 6.5 Core Storage Features and Functionality
• This presentation may contain product features that are currently under development.
• This overview of new technology represents no commitment from VMware to deliver these
features in any generally available product.
• Features are subject to change, and must not be included in contracts, purchase orders, or
sales agreements of any kind.
• Technical feasibility and market demand will affect final delivery.
• Pricing and packaging for any new technologies or features discussed or presented have not
been determined.
Disclaimer
2#SER1143BU CONFIDENTIAL
Introduction
Welcome from Cormac and Cody
• Cormac
• Director and Chief Technologist
• VMware
• @CormacJHogan
• http://cormachogan.com
4
• Cody
• Technical Director for VMware Solutions
• Pure Storage
• @CodyHosterman
• https://codyhosterman.com
#SER1143BU CONFIDENTIAL
Agenda Slide
1 Limits
2 VMFS-6
3 VAAI (ATS Miscompare and UNMAP)
4 SPBM (SIOCv2 and vSphere VM Encryption)
5 NFS v4.1
6 iSCSI
7 NVMe
5#SER1143BU CONFIDENTIAL
vSphere 6.5 Storage Limits
vSphere 6.5 Scaling and Limits
Paths
• ESXi hosts now support up to 2000 paths
– An increase from the 1024 paths per host supported previously
Devices
• ESXi hosts now support up to 512 devices
– An increase from the 256 devices per host supported previously
– Multiple targets are required to address more than 256 devices
– This does not impact Virtual Volumes (aka VVols), which can address 16,383 VVols per PE
7#SER1143BU CONFIDENTIAL
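A quick way to check how close a host is to these limits is to count entries in the esxcli device and path listings; a minimal sketch, assuming the standard "Display Name:" and "Runtime Name:" fields in the output:

# Count SCSI devices visible to this host (limit raised to 512 in 6.5)
esxcli storage core device list | grep -c "Display Name:"
# Count logical paths on this host (limit raised to 2000 in 6.5)
esxcli storage core path list | grep -c "Runtime Name:"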
vSphere 6.5 Scaling and Limits
• 512e Advanced Format Device Support
• Capacity limits are now an issue with the 512n (native) sector size currently used in disk drives
• New Advanced Format (AF) drives use a 4K native sector size for higher capacity
• These 4Kn devices are not yet supported on vSphere
• For legacy applications and operating systems that cannot support 4Kn drives, new 4K sector
size drives that run in 512 emulation (512e) mode are now available
– These drives have a physical sector size of 4K but a logical sector size of 512 bytes
• These drives are now supported on vSphere 6.5 for VMFS and RDMs (Raw Device Mappings)
8#SER1143BU CONFIDENTIAL
512n/512e
• # esxcli storage core device capacity list
9
[root@esxi-dell-e:~] esxcli storage core device capacity list
Device                                Physical   Logical    Logical       Size        Format Type
                                      Blocksize  Blocksize  Block Count
------------------------------------  ---------  ---------  -----------   ----------  -----------
naa.624a9370d4d78052ea564a7e00011014  512        512        20971520      10240 MiB   512n
naa.624a9370d4d78052ea564a7e00011015  512        512        20971520      10240 MiB   512n
naa.624a9370d4d78052ea564a7e00011138  512        512        1048576000    512000 MiB  512n
naa.624a9370d4d78052ea564a7e00011139  512        512        1048576000    512000 MiB  512n
naa.55cd2e404c31fa00                  4096       512        390721968     190782 MiB  512e   <- 512e (emulated)
naa.500a07510f86d6bb                  4096       512        1562824368    763097 MiB  512e
naa.500a07510f86d685                  4096       512        1562824368    763097 MiB  512e
naa.5001e820026415f0                  512        512        390721968     190782 MiB  512n   <- 512n (native)
#SER1143BU CONFIDENTIAL
DSNRO
The setting “Disk.SchedNumReqOutstanding” aka “No of outstanding IOs with
competing worlds” has changed in behavior
• DSNRO can be set to a maximum of:
– 6.0 and earlier: 256
– 6.5 and on: Whatever the HBA Device Queue Depth Limit is
• Allows for extreme levels of performance
10#SER1143BU CONFIDENTIAL
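As a sketch, DSNRO is set per device; the device identifier below is a placeholder taken from the earlier capacity listing, and since 6.5 the upper bound is the HBA device queue depth rather than 256:

# Show the current value ("No of outstanding IOs with competing worlds")
esxcli storage core device list -d naa.624a9370d4d78052ea564a7e00011014 | grep -i outstanding
# Raise DSNRO for this device (placeholder ID; may now go up to the HBA device queue depth)
esxcli storage core device set -d naa.624a9370d4d78052ea564a7e00011014 -O 256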
VMFS-6
VMFS-6: On-disk Format Changes
• File System Resource Management - File Block Format
• VMFS-6 has two new “internal” block sizes, small file block (SFB) and large file block (LFB)
– The SFB size is set to 1MB; the LFB size is set to 512MB
– These are internal concepts for “files” only; the VMFS block size is still 1MB
• Thin disks created on VMFS-6 are initially backed with SFBs
• Thick disks created on VMFS-6 are allocated LFBs as much as possible
– For the portion of the thick disk which does not fit into an LFB, SFBs are allocated
• These enhancements should result in much faster file creation times
– Especially true with swap file creation so long as the swap file can be created with all LFBs
– Swap files are always thickly provisioned
12
VMFS-6
#SER1143BU CONFIDENTIAL
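To see the SFB/LFB behavior in practice, create one thin and one eager-zeroed thick disk on a VMFS-6 datastore; the paths below are placeholders. The thick disk should be allocated largely from 512MB LFBs, the thin disk from 1MB SFBs:

# Thin disk: initially backed by small file blocks (SFBs)
vmkfstools -c 20G -d thin /vmfs/volumes/vmfs6-ds/demo/thin.vmdk
# Eager-zeroed thick disk: allocated from large file blocks (LFBs) where possible
vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/vmfs6-ds/demo/thick.vmdk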
VMFS-6: On-disk Format Changes
• Dynamic System Resource Files
• System resource files (.fdc.sf, .pbc.sf, .sbc.sf, .jbc.sf) are now extended dynamically
for VMFS-6
– Previously these were static in size
– These may show a much smaller size initially, when compared to previous versions of
VMFS, but they will grow over time
• If the filesystem exhausts any resources, the respective system resource file is
extended to create additional resources
• VMFS-6 can now support millions of files / pointer blocks / sub blocks (as long as
volume has free space)
13
VMFS-6
#SER1143BU CONFIDENTIAL
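The resource files live, hidden, at the root of the datastore; a minimal sketch with a placeholder datastore name:

# List the system resource files (.fdc.sf, .pbc.sf, .sbc.sf, .jbc.sf, ...) on a VMFS-6 volume
ls -lha /vmfs/volumes/vmfs6-ds/ | grep '\.sf'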
vmkfstools – 500GB VMFS-5 Volume
#SER1143BU CONFIDENTIAL 14
# vmkfstools -P -v10 /vmfs/devices/<device id>
vmkfstools – 500GB VMFS-6 volume
15
Large file blocks
In VMFS-6, sub blocks are used for pointer blocks, which is why Ptr Blocks (max) is shown as 0 here
#SER1143BU CONFIDENTIAL
VMFS-6: On-disk Format Changes
• File System Resource Management - Journaling
• VMFS is a distributed journaling filesystem
• Journals are used on VMFS when performing metadata updates on the filesystem
• Previous versions of VMFS used regular file blocks as journal resource blocks
• In VMFS-6, journal blocks are tracked in a separate system resource file called .jbc.sf
• This was introduced to address VMFS journal-related issues on previous versions of VMFS,
caused by the use of regular file blocks as journal blocks and vice-versa
– E.g. a full file system; see VMware KB article 1010931
16
VMFS-6
#SER1143BU CONFIDENTIAL
New Journal System File Resource
17
VMFS-5
VMFS-6
#SER1143BU CONFIDENTIAL
VMFS-6: VM-based Block Allocation Affinity
• Resources for VMs (blocks, file descriptors, etc.) on earlier VMFS versions were
allocated on a per host basis (host-based block allocation affinity)
• Host contention issues arose when a VM/VMDK was created on one host, and then
vMotion was used to migrate the VM to another host
• If additional blocks were allocated to the VM/VMDK by the new host at the same time
as the original host tried to allocate blocks for a different VM in the same resource
group, the different hosts could contend for resource locks on the same resource
• This change introduces VM-based block allocation affinity, which will decrease
resource lock contention
18
VMFS-6
#SER1143BU CONFIDENTIAL
VMFS-6: Parallelism/Concurrency Improvements
• Some of the biggest delays on VMFS were in device scanning and filesystem probing
• vSphere 6.5 has new, highly parallel, device discovery and filesystem probing
mechanisms
– Previous versions of VMFS only allowed one transaction at a time per host on a given
filesystem; VMFS-6 supports multiple, concurrent transactions at a time per host
• These improvements are significant for failover events, and Site Recovery Manager
(SRM) should especially benefit
• They were also required to support the higher device and path limits in vSphere 6.5
19
VMFS-6
#SER1143BU CONFIDENTIAL
Hot Extend Support
• Prior to ESXi 6.5, VMDKs on a powered-on VM
could only be grown if their size was less than 2TB
• If the size of a VMDK was 2TB or larger, or the
expand operation caused it to exceed 2TB, the hot
extend operation would fail
• This typically required administrators to shut down
the virtual machine to expand a VMDK beyond 2TB
• The behavior has been changed in vSphere 6.5
and hot extend no longer has this limitation
20
This is a vSphere 6.5 improvement, not specific to VMFS-6.
This will also work on VMFS-5 volumes.
#SER1143BU CONFIDENTIAL
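The extend operation itself is unchanged; a sketch with a placeholder path (hot extend of a running VM is normally driven through the vSphere Client or API, while vmkfstools requires the disk to be unlocked):

# Grow a VMDK to 3TB; on vSphere 6.5 this size no longer blocks hot extend
vmkfstools -X 3t /vmfs/volumes/datastore1/myvm/myvm.vmdk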
“Upgrading” to VMFS-6
• No direct ‘in-place’ upgrade of the filesystem to VMFS-6 is available.
New datastores only.
• Customers upgrading to the vSphere 6.5 release should continue to
use their VMFS-5 (or older) datastores until they can create new
VMFS-6 datastores
• Use migration techniques such as Storage vMotion to move
VMs from the old datastore to the new VMFS-6 datastore (see the sketch below)
21#SER1143BU CONFIDENTIAL
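A minimal sketch of the first step, with placeholder device and label names (the target partition must already exist):

# Create a brand-new VMFS-6 datastore on partition 1 of an unused device
vmkfstools -C vmfs6 -S New-VMFS6-DS /vmfs/devices/disks/naa.624a9370d4d78052ea564a7e00011138:1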
22
VMFS-6
Performance Improvements with resignature
(discovery and filesystem probing)
#SER1143BU CONFIDENTIAL
23#SER1143BU CONFIDENTIAL
VAAI
vSphere APIs for Array Integration
ATS Miscompare Handling (1 of 3)
• The heartbeat region of VMFS is used
for on-disk locking
• Every host that uses the VMFS volume
has its own heartbeat region
• This region is updated by the host on
every heartbeat
• The region that is updated is the time
stamp, which tells others that this host
is alive
• When the host is down, this region is
used to communicate lock state to
other hosts
25
ATS
#SER1143BU CONFIDENTIAL
ATS Miscompare Handling (2 of 3)
• In vSphere 5.5 U2, we started using ATS for maintaining the heartbeat
• ATS is the Atomic Test and Set primitive which is one of the VAAI primitives
• Prior to this release, we only used ATS when the heartbeat state changed
• For example, we would use ATS in the following cases:
– Acquire a heartbeat
– Clear a heartbeat
– Replay a heartbeat
– Reclaim a heartbeat
• We did not use ATS for maintaining the ‘liveness’ of a heartbeat
• This change, using ATS to maintain the ‘liveness’ of a heartbeat, appears to have led to issues for
certain storage arrays
26
ATS
#SER1143BU CONFIDENTIAL
ATS Miscompare Handling (3 of 3)
• When an ATS Miscompare is received, all outstanding IO is aborted
• This led to additional stress and load being placed on the storage arrays
– In some cases, this led to the controllers crashing on the array
• In vSphere 6.5, there are new heuristics added so that when we get a miscompare event, we
retry the read and verify that there is a miscompare
• If the miscompare is real, then we do the same as before, i.e. abort outstanding I/O
• If the on-disk HB data has not changed, then this is a false miscompare
• In the event of a false miscompare:
– VMFS will not immediately abort IOs
– VMFS will re-attempt ATS HB after a short interval (usually less than 100ms)
27
ATS
#SER1143BU CONFIDENTIAL
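If an array still cannot cope with ATS heartbeats, the documented fallback (VMware KB 2113956, as we recall; verify the option name against the KB) is to revert heartbeating to plain SCSI reads and writes; a sketch, to be used only on the array vendor's advice:

# Check whether ATS is used for the VMFS heartbeat (1 = ATS, 0 = plain SCSI)
esxcli system settings advanced list -o /VMFS3/useATSForHBOnVMFS5
# Fall back to non-ATS heartbeating (only if the array vendor recommends it)
esxcli system settings advanced set -i 0 -o /VMFS3/useATSForHBOnVMFS5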
An Introduction to UNMAP
UNMAP via datastore
• VAAI UNMAP was introduced in vSphere 5.0
• Enables the ESXi host to inform the backing storage that files or VMs have been moved or
deleted from a thin-provisioned VMFS datastore
• Allows the backing storage to reclaim the freed blocks
• There was no way of doing this previously, which resulted in stranded space on thin-provisioned
VMFS datastores
28
UNMAP
#SER1143BU CONFIDENTIAL
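Before the automation added in 6.5, this reclamation was triggered by hand; a minimal sketch with a placeholder datastore label:

# Manually reclaim free space on a VMFS datastore (vSphere 5.5 and later)
esxcli storage vmfs unmap -l mydatastore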
Automated UNMAP in vSphere 6.5
Introducing Automated UNMAP Space Reclamation
• In vSphere 6.5, there is now an automated UNMAP crawler mechanism for
reclaiming dead or stranded space on VMFS datastores
• Now UNMAP will run continuously in the background
• UNMAP granularity on the storage array
– The granularity of the reclaim is set to 1MB chunks
– Automatic UNMAP is not supported on arrays with UNMAP granularity greater
than 1MB
– Auto UNMAP feature support is footnoted in the VMware Hardware
Compatibility Guide (HCL)
29
UNMAP
#SER1143BU CONFIDENTIAL
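Each datastore carries its own reclamation settings; a sketch (placeholder label) showing the granularity and priority the background crawler will use:

# Show the reclaim granularity and priority for a VMFS-6 datastore
esxcli storage vmfs reclaim config get -l mydatastore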
Some Considerations with Automated UNMAP
• UNMAP is only issued to datastores that are VMFS-6 and have powered-on VMs
• Reclamation can take 12-24 hours to complete
• The default behavior is on, but it can be turned off on the host (that host won’t
participate) …
– EnableVMFS6Unmap
• …or on the datastore (no hosts will reclaim it) – see the sketch after this slide
30
UNMAP
#SER1143BU CONFIDENTIAL
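Both off-switches are exposed through esxcli; a minimal sketch with placeholder names:

# Per host: stop this host from participating in automatic UNMAP
esxcli system settings advanced set -o /VMFS3/EnableVMFS6Unmap -i 0
# Per datastore: reclaim priority "none" stops all hosts from reclaiming it
esxcli storage vmfs reclaim config set -l mydatastore -p none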
An Introduction to Guest OS UNMAP
UNMAP via Guest OS
• In vSphere 6.0, additional improvements to UNMAP facilitate the reclaiming of
stranded space from within a Guest OS
• Effectively, the ability for a Guest OS in a thinly provisioned VM to tell the backing
storage that blocks are no longer in use
• This allows the backing storage to reclaim the capacity, and the VMDK to shrink in size
31
UNMAP
#SER1143BU CONFIDENTIAL
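From a Linux guest, reclamation is typically driven with fstrim, assuming the virtual disk is thin and the guest sees discard support; a minimal sketch:

# Inside the guest: trim freed blocks on the filesystem mounted at /data
fstrim -v /data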
Some Considerations with Guest OS UNMAP
TRIM Handling
• UNMAP works at certain block boundaries on VMFS, whereas TRIM does not have
such restrictions
• While this should be fine on VMFS-6, which is now 4K aligned, certain TRIMs
converted into UNMAPs may fail due to block alignment issues on previous
versions of VMFS
Linux Guest OS SPC-4 support
• Initially, in-guest UNMAP support to reclaim dead space natively was
limited to Windows 2012 R2
• Linux distributions check the SCSI version and, unless it is version 5 or greater,
do not send UNMAPs
• With SPC-4 support introduced in vSphere 6.5, Linux Guest OSes will now also
be able to issue UNMAPs
32
UNMAP
#SER1143BU CONFIDENTIAL
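To confirm what the guest actually sees, query the virtual disk's INQUIRY data from inside a Linux guest (sg3_utils); on a hardware version 13 VM under vSphere 6.5, the reported version should be SPC-4:

# Inside the guest: report the SCSI version the virtual disk advertises (placeholder device)
sg_inq /dev/sdb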
Automated UNMAP Limits and Considerations
Guest OS filesystem alignment
• VMDKs are aligned on 1MB block boundaries
• However, misalignment may still occur within the guest OS filesystem
• This may also prevent UNMAP from working correctly
• A best practice is to align guest OS partitions to the 1MB granularity
boundary
33
UNMAP
#SER1143BU CONFIDENTIAL
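Most modern installers do this by default; to create an aligned partition explicitly inside the guest (placeholder device), a sketch:

# Inside the guest: create a GPT partition starting on a 1MiB boundary
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%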
Known Automated UNMAP Issues
• vSphere 6.5
– Tools in the guest operating system might send UNMAP requests that are not aligned
to the VMFS unmap granularity.
– Such requests are not passed to the storage array for space reclamation.
– Further info in KB article 2148987
– This issue is addressed in vSphere 6.5 P01
• vSphere 6.5 P01
– Certain versions of Windows Guest OS running in a VM may appear
unresponsive if UNMAP is used.
– Further info in KB article 2150591.
– This issue is addressed in vSphere 6.5 U1.
34
UNMAP
#SER1143BU CONFIDENTIAL
35
UNMAP in action
#SER1143BU CONFIDENTIAL
36#SER1143BU CONFIDENTIAL
SPBM
Storage Policy Based Management
The Storage Policy Based Management (SPBM) Paradigm
• SPBM is the foundation of VMware's Software Defined Storage vision
• A common framework to allow storage- and host-related capabilities to be consumed via policies
• Applies data services (e.g. protection, encryption, performance) on a per-VM, or even per-VMDK, level
38#SER1143BU CONFIDENTIAL
Creating Policies via Rules and Rule Sets
• Rule
– A Rule references a combination of a metadata tag and a related value, indicating
the quality or quantity of the capability that is desired
– These two items act as a key and a value that, when referenced together through
a Rule, become a condition that must be met for compliance
• Rule Sets
– A Rule Set is comprised of one or more Rules
– A storage policy includes one or more Rule Sets that describe requirements for
virtual machine storage resources
– Multiple “Rule Sets” can be leveraged to allow a single storage policy to define
alternative selection parameters, even from several storage providers
39#SER1143BU CONFIDENTIAL
#SER1143BU CONFIDENTIAL 40
[Diagram: the SPBM framework spanning host-side data services (VAIO) and datastore types (vSAN, VVols, VMFS)]
41
SPBM and Common Rules for Data Services provided by hosts
- VM Encryption
- Storage I/O Control v2
#SER1143BU CONFIDENTIAL
42
2 new features introduced with vSphere 6.5
- Encryption
- Storage I/O Control v2
Implementation is done via I/O Filters
#SER1143BU CONFIDENTIAL
#SER1143BU CONFIDENTIAL
Storage I/O Control v2
• VM Storage Policies in vSphere 6.5 have a new option called “Common Rules”
• These are used for configuring data services provided by hosts, such as Storage I/O Control
and Encryption. It is the same mechanism used for VAIO/IO Filters
43
vSphere VM Encryption
• vSphere 6.5 introduces a new VM encryption mechanism
• It requires an external Key Management Server (KMS). Check the HCL for supported vendors
• This encryption mechanism is implemented in the hypervisor, making vSphere VM encryption
agnostic to the Guest OS
• This not only encrypts the VMDK, but it also encrypts some of the VM Home directory contents,
e.g. VMX file, metadata files, etc.
• Like SIOCv2, vSphere VM Encryption in vSphere 6.5 is policy driven
44#SER1143BU CONFIDENTIAL
#SER1143BU CONFIDENTIAL
vSphere VM Encryption I/O Filter
• Common rules must be enabled to add vSphere VM Encryption to a policy.
• The only setting in the custom encryption policy is whether to allow I/O filters before encryption.
45
46
VM Encryption and SIOC policy
#SER1143BU CONFIDENTIAL
47#SER1143BU CONFIDENTIAL
NFS v4.1 Improvements
NFS v4.1 Improvements
• Hardware Acceleration/VAAI-NAS Improvements
– NFS 4.1 client in vSphere 6.5 supports hardware acceleration by offloading certain
operations to the storage array.
– This comes in the form of a plugin to the ESXi host that is developed/provided by the
storage array partner.
– Refer to your NAS storage array vendor for further information.
• Kerberos IPv6 Support
– NFS v4.1 Kerberos adds IPv6 support in vSphere 6.5.
• Kerberos AES Encryption Support
– NFS v4.1 Kerberos adds Advanced Encryption Standards (AES) encryption support in
vSphere 6.5
49
NFSv4.1
#SER1143BU CONFIDENTIAL
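Mounting an NFS v4.1 datastore with Kerberos from the CLI looks like the sketch below; server, export, and datastore names are placeholders, and we believe --sec accepts SEC_KRB5 (and SEC_KRB5I for integrity checking), so verify against the esxcli reference:

# Mount an NFS v4.1 datastore using Kerberos authentication
esxcli storage nfs41 add --hosts nfs-server.lab.local --share /exports/ds01 --volume-name nfs41-ds01 --sec SEC_KRB5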
iSCSI Improvements
iSCSI Enhancements
• iSCSI Routing and Port Binding
– ESXi 6.5 now supports having the iSCSI initiator and the iSCSI target residing in different
network subnets with port binding
• UEFI iSCSI Boot
– VMware now supports UEFI (Unified Extensible Firmware Interface) iSCSI Boot on Dell
13th generation servers with the Intel X540 dual-port Network Interface Card (NIC).
51
iSCSI
#SER1143BU CONFIDENTIAL
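Port binding itself is configured as before; what changes in 6.5 is that bound vmkernel ports may now reach targets in other subnets. A sketch with placeholder adapter and NIC names:

# Bind vmk1 to the software iSCSI adapter; in 6.5 the target may sit in a different subnet
esxcli iscsi networkportal add -A vmhba64 -n vmk1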
NVMe Support
NVMe (1 of 2)
• Virtual NVMe Device
– A new virtual storage HBA for all-flash
SAN/vSAN storage
– Newer operating systems now leverage
multiple queues with NVMe devices
• The virtual NVMe device allows VMs to take
advantage of such in-guest IO stack
improvements
– Improved performance compared to the virtual
SATA device on local PCIe SSD devices
• The virtual NVMe device provides 30-50%
lower CPU cost per I/O
• The virtual NVMe device achieves 30-80%
higher IOPS
53#SER1143BU CONFIDENTIAL
NVMe (2 of 2)
• Supported configuration of the virtual NVMe device:
54
Number of controllers per VM:          4     (enumerated as nvme0, …, nvme3)
Number of namespaces per controller:   15    (each namespace is mapped to a virtual disk; enumerated as nvme0:0, …, nvme0:15)
Maximum queues and interrupts:         16    (1 admin + 15 I/O queues)
Maximum queue depth:                   256   (4K in-flight commands per controller)
• Supports NVMe Specification v1.0e mandatory admin and I/O commands
• Interoperability with all existing vSphere features, except SMP-FT
#SER1143BU CONFIDENTIAL
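In .vmx terms, a virtual NVMe controller and namespace look roughly like the fragment below; the key names are our assumption based on hardware version 13 VMs, and you would normally add the controller through the vSphere Client rather than by hand:

# Hypothetical .vmx fragment (key names assumed): one NVMe controller, one namespace
nvme0.present = "TRUE"
nvme0:0.present = "TRUE"
nvme0:0.fileName = "myvm.vmdk"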
Cormac Hogan
chogan@vmware.com
@cormacjhogan
Cody Hosterman
cody@purestorage.com
@codyhosterman

More Related Content

What's hot

Mise en place de zabbix sur Ubuntu 22.04
Mise en place de zabbix sur Ubuntu 22.04Mise en place de zabbix sur Ubuntu 22.04
Mise en place de zabbix sur Ubuntu 22.04ImnaTech
 
Jenkins를 활용한 Openshift CI/CD 구성
Jenkins를 활용한 Openshift CI/CD 구성 Jenkins를 활용한 Openshift CI/CD 구성
Jenkins를 활용한 Openshift CI/CD 구성 rockplace
 
Ansible roles done right
Ansible roles done rightAnsible roles done right
Ansible roles done rightDan Vaida
 
フレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃう
フレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃうフレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃう
フレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃう株式会社オプト 仙台ラボラトリ
 
Git branching strategies
Git branching strategiesGit branching strategies
Git branching strategiesjstack
 
[FR] Présentatation d'Ansible
[FR] Présentatation d'Ansible [FR] Présentatation d'Ansible
[FR] Présentatation d'Ansible Armand Guio
 
新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!
新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!
新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!Masahiko Ebisuda
 
Devoxx : being productive with JHipster
Devoxx : being productive with JHipsterDevoxx : being productive with JHipster
Devoxx : being productive with JHipsterJulien Dubois
 
Jenkinsとamazon ecsで コンテナCI
Jenkinsとamazon ecsで コンテナCIJenkinsとamazon ecsで コンテナCI
Jenkinsとamazon ecsで コンテナCIshigeyuki azuchi
 
[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?
[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?
[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?OpenStack Korea Community
 
Getting started with Ansible
Getting started with AnsibleGetting started with Ansible
Getting started with AnsibleIvan Serdyuk
 
Room 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsi
Room 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsiRoom 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsi
Room 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsiVietnam Open Infrastructure User Group
 
02.실전! 시스템 관리자를 위한 Ansible
02.실전! 시스템 관리자를 위한 Ansible02.실전! 시스템 관리자를 위한 Ansible
02.실전! 시스템 관리자를 위한 AnsibleOpennaru, inc.
 
『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!
『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!
『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!株式会社クライム
 
Red Hat Ansible 적용 사례
Red Hat Ansible 적용 사례Red Hat Ansible 적용 사례
Red Hat Ansible 적용 사례Opennaru, inc.
 
GitHub Actions (Nakov at RuseConf, Sept 2022)
GitHub Actions (Nakov at RuseConf, Sept 2022)GitHub Actions (Nakov at RuseConf, Sept 2022)
GitHub Actions (Nakov at RuseConf, Sept 2022)Svetlin Nakov
 
Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...
Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...
Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...Vietnam Open Infrastructure User Group
 

What's hot (20)

Ansible - Introduction
Ansible - IntroductionAnsible - Introduction
Ansible - Introduction
 
Mise en place de zabbix sur Ubuntu 22.04
Mise en place de zabbix sur Ubuntu 22.04Mise en place de zabbix sur Ubuntu 22.04
Mise en place de zabbix sur Ubuntu 22.04
 
Jenkins를 활용한 Openshift CI/CD 구성
Jenkins를 활용한 Openshift CI/CD 구성 Jenkins를 활용한 Openshift CI/CD 구성
Jenkins를 활용한 Openshift CI/CD 구성
 
Ansible roles done right
Ansible roles done rightAnsible roles done right
Ansible roles done right
 
フレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃう
フレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃうフレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃう
フレームワークも使っていないWebアプリをLaravel+PWAでモバイルアプリっぽくしてみちゃう
 
Git branching strategies
Git branching strategiesGit branching strategies
Git branching strategies
 
[FR] Présentatation d'Ansible
[FR] Présentatation d'Ansible [FR] Présentatation d'Ansible
[FR] Présentatation d'Ansible
 
新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!
新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!
新しくなったAzure Stack HCIは以前と何が違うのか?もう一度ゼロからしっかり整理します!
 
Devoxx : being productive with JHipster
Devoxx : being productive with JHipsterDevoxx : being productive with JHipster
Devoxx : being productive with JHipster
 
Jenkinsとamazon ecsで コンテナCI
Jenkinsとamazon ecsで コンテナCIJenkinsとamazon ecsで コンテナCI
Jenkinsとamazon ecsで コンテナCI
 
Ansible
AnsibleAnsible
Ansible
 
[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?
[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?
[OpenStack Days Korea 2016] Track1 - 카카오는 오픈스택 기반으로 어떻게 5000VM을 운영하고 있을까?
 
Getting started with Ansible
Getting started with AnsibleGetting started with Ansible
Getting started with Ansible
 
ansible why ?
ansible why ?ansible why ?
ansible why ?
 
Room 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsi
Room 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsiRoom 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsi
Room 1 - 2 - Nguyễn Văn Thắng & Dzung Nguyen - Proxmox VE và ZFS over iscsi
 
02.실전! 시스템 관리자를 위한 Ansible
02.실전! 시스템 관리자를 위한 Ansible02.실전! 시스템 관리자를 위한 Ansible
02.실전! 시스템 관리자를 위한 Ansible
 
『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!
『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!
『VMware Cloud on AWS』×『Veeam』移行/データ保護の最適解はこれだ!
 
Red Hat Ansible 적용 사례
Red Hat Ansible 적용 사례Red Hat Ansible 적용 사례
Red Hat Ansible 적용 사례
 
GitHub Actions (Nakov at RuseConf, Sept 2022)
GitHub Actions (Nakov at RuseConf, Sept 2022)GitHub Actions (Nakov at RuseConf, Sept 2022)
GitHub Actions (Nakov at RuseConf, Sept 2022)
 
Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...
Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...
Room 1 - 7 - Lê Quốc Đạt - Upgrading network of Openstack to SDN with Tungste...
 

Viewers also liked

VMworld 2017 - Top 10 things to know about vSAN
VMworld 2017 - Top 10 things to know about vSANVMworld 2017 - Top 10 things to know about vSAN
VMworld 2017 - Top 10 things to know about vSANDuncan Epping
 
VMworld 2017 vSAN Network Design
VMworld 2017 vSAN Network Design VMworld 2017 vSAN Network Design
VMworld 2017 vSAN Network Design Cormac Hogan
 
2017 VMUG Storage Policy Based Management
2017 VMUG Storage Policy Based Management2017 VMUG Storage Policy Based Management
2017 VMUG Storage Policy Based ManagementCormac Hogan
 
Advancedtroubleshooting 101208145718-phpapp01
Advancedtroubleshooting 101208145718-phpapp01Advancedtroubleshooting 101208145718-phpapp01
Advancedtroubleshooting 101208145718-phpapp01Suresh Kumar
 
Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02
Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02
Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02Suresh Kumar
 
VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!VMworld
 
VMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep DiveVMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep DiveVMworld
 
01 t1 s2_linux_lesson1
01 t1 s2_linux_lesson101 t1 s2_linux_lesson1
01 t1 s2_linux_lesson1Niit Care
 
Linux On V Mware ESXi
Linux On V Mware ESXiLinux On V Mware ESXi
Linux On V Mware ESXiMasafumi Ohta
 
Linux Administration
Linux AdministrationLinux Administration
Linux AdministrationHarish1983
 
Linux ppt
Linux pptLinux ppt
Linux pptlincy21
 
2017 holiday survey: An annual analysis of the peak shopping season
2017 holiday survey: An annual analysis of the peak shopping season2017 holiday survey: An annual analysis of the peak shopping season
2017 holiday survey: An annual analysis of the peak shopping seasonDeloitte United States
 
Inside Google's Numbers in 2017
Inside Google's Numbers in 2017Inside Google's Numbers in 2017
Inside Google's Numbers in 2017Rand Fishkin
 

Viewers also liked (17)

VMworld 2017 - Top 10 things to know about vSAN
VMworld 2017 - Top 10 things to know about vSANVMworld 2017 - Top 10 things to know about vSAN
VMworld 2017 - Top 10 things to know about vSAN
 
VMworld 2017 vSAN Network Design
VMworld 2017 vSAN Network Design VMworld 2017 vSAN Network Design
VMworld 2017 vSAN Network Design
 
2017 VMUG Storage Policy Based Management
2017 VMUG Storage Policy Based Management2017 VMUG Storage Policy Based Management
2017 VMUG Storage Policy Based Management
 
Advancedtroubleshooting 101208145718-phpapp01
Advancedtroubleshooting 101208145718-phpapp01Advancedtroubleshooting 101208145718-phpapp01
Advancedtroubleshooting 101208145718-phpapp01
 
Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02
Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02
Advancedperformancetroubleshootingusingesxtop 101110131727-phpapp02
 
ESX performance problems 10 steps
ESX performance problems 10 stepsESX performance problems 10 steps
ESX performance problems 10 steps
 
Top ESXi command line v2.0
Top ESXi command line v2.0Top ESXi command line v2.0
Top ESXi command line v2.0
 
VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!
 
VMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep DiveVMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep Dive
 
Linux Administration
Linux AdministrationLinux Administration
Linux Administration
 
01 t1 s2_linux_lesson1
01 t1 s2_linux_lesson101 t1 s2_linux_lesson1
01 t1 s2_linux_lesson1
 
Linux On V Mware ESXi
Linux On V Mware ESXiLinux On V Mware ESXi
Linux On V Mware ESXi
 
Linux Administration
Linux AdministrationLinux Administration
Linux Administration
 
Linux ppt
Linux pptLinux ppt
Linux ppt
 
2017 holiday survey: An annual analysis of the peak shopping season
2017 holiday survey: An annual analysis of the peak shopping season2017 holiday survey: An annual analysis of the peak shopping season
2017 holiday survey: An annual analysis of the peak shopping season
 
Inside Google's Numbers in 2017
Inside Google's Numbers in 2017Inside Google's Numbers in 2017
Inside Google's Numbers in 2017
 
The AI Rush
The AI RushThe AI Rush
The AI Rush
 

Similar to VMworld 2017 Core Storage

VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld
 
VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts
VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts
VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts VMworld
 
Presentation oracle rac on vsphere 5
Presentation   oracle rac on vsphere 5Presentation   oracle rac on vsphere 5
Presentation oracle rac on vsphere 5solarisyourep
 
Five common customer use cases for Virtual SAN - VMworld US / 2015
Five common customer use cases for Virtual SAN - VMworld US / 2015Five common customer use cases for Virtual SAN - VMworld US / 2015
Five common customer use cases for Virtual SAN - VMworld US / 2015Duncan Epping
 
VMworld 2013: What's New in vSphere Platform & Storage
VMworld 2013: What's New in vSphere Platform & Storage VMworld 2013: What's New in vSphere Platform & Storage
VMworld 2013: What's New in vSphere Platform & Storage VMworld
 
VMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes ConfigurationsVMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes ConfigurationsVMworld
 
Varrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentationVarrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentationpittmantony
 
VMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep DiveVMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep DiveVMworld
 
Cloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentationCloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentationsolarisyourep
 
Cloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentationCloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentationxKinAnx
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentationvirtualsouthwest
 
Oracle Flex ASM - What’s New and Best Practices by Jim Williams
Oracle Flex ASM - What’s New and Best Practices by Jim WilliamsOracle Flex ASM - What’s New and Best Practices by Jim Williams
Oracle Flex ASM - What’s New and Best Practices by Jim WilliamsMarkus Michalewicz
 
VMworld 2013: Successfully Virtualize Microsoft Exchange Server
VMworld 2013: Successfully Virtualize Microsoft Exchange Server VMworld 2013: Successfully Virtualize Microsoft Exchange Server
VMworld 2013: Successfully Virtualize Microsoft Exchange Server VMworld
 
VMworld 2015: Virtual Volumes Technical Deep Dive
VMworld 2015: Virtual Volumes Technical Deep DiveVMworld 2015: Virtual Volumes Technical Deep Dive
VMworld 2015: Virtual Volumes Technical Deep DiveVMworld
 
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld
 
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red_Hat_Storage
 
VMworld Europe 2014: Virtual SAN Architecture Deep Dive
VMworld Europe 2014: Virtual SAN Architecture Deep DiveVMworld Europe 2014: Virtual SAN Architecture Deep Dive
VMworld Europe 2014: Virtual SAN Architecture Deep DiveVMworld
 
VMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphereVMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphereVMworld
 
Dp sql restore latest
Dp sql restore latestDp sql restore latest
Dp sql restore latestJyothirmaiG4
 

Similar to VMworld 2017 Core Storage (20)

vSphere
vSpherevSphere
vSphere
 
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
 
VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts
VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts
VMworld 2013: Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts
 
Presentation oracle rac on vsphere 5
Presentation   oracle rac on vsphere 5Presentation   oracle rac on vsphere 5
Presentation oracle rac on vsphere 5
 
Five common customer use cases for Virtual SAN - VMworld US / 2015
Five common customer use cases for Virtual SAN - VMworld US / 2015Five common customer use cases for Virtual SAN - VMworld US / 2015
Five common customer use cases for Virtual SAN - VMworld US / 2015
 
VMworld 2013: What's New in vSphere Platform & Storage
VMworld 2013: What's New in vSphere Platform & Storage VMworld 2013: What's New in vSphere Platform & Storage
VMworld 2013: What's New in vSphere Platform & Storage
 
VMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes ConfigurationsVMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes Configurations
 
Varrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentationVarrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentation
 
VMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep DiveVMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep Dive
 
Cloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentationCloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentation
 
Cloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentationCloud infrastructure licensing and pricing customer presentation
Cloud infrastructure licensing and pricing customer presentation
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentation
 
Oracle Flex ASM - What’s New and Best Practices by Jim Williams
Oracle Flex ASM - What’s New and Best Practices by Jim WilliamsOracle Flex ASM - What’s New and Best Practices by Jim Williams
Oracle Flex ASM - What’s New and Best Practices by Jim Williams
 
VMworld 2013: Successfully Virtualize Microsoft Exchange Server
VMworld 2013: Successfully Virtualize Microsoft Exchange Server VMworld 2013: Successfully Virtualize Microsoft Exchange Server
VMworld 2013: Successfully Virtualize Microsoft Exchange Server
 
VMworld 2015: Virtual Volumes Technical Deep Dive
VMworld 2015: Virtual Volumes Technical Deep DiveVMworld 2015: Virtual Volumes Technical Deep Dive
VMworld 2015: Virtual Volumes Technical Deep Dive
 
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep Dive
 
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
 
VMworld Europe 2014: Virtual SAN Architecture Deep Dive
VMworld Europe 2014: Virtual SAN Architecture Deep DiveVMworld Europe 2014: Virtual SAN Architecture Deep Dive
VMworld Europe 2014: Virtual SAN Architecture Deep Dive
 
VMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphereVMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphere
 
Dp sql restore latest
Dp sql restore latestDp sql restore latest
Dp sql restore latest
 

Recently uploaded

Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilV3cube
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...gurkirankumar98700
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 

Recently uploaded (20)

Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 

VMworld 2017 Core Storage

  • 1. Cormac Hogan Cody Hosterman SER1143BU #VMworld #SER1143BU A Deep Dive into vSphere 6.5 Core Storage Features and Functionality
  • 2. • This presentation may contain product features that are currently under development. • This overview of new technology represents no commitment from VMware to deliver these features in any generally available product. • Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. • Technical feasibility and market demand will affect final delivery. • Pricing and packaging for any new technologies or features discussed or presented have not been determined. Disclaimer 2#SER1143BU CONFIDENTIAL
  • 4. Welcome from Cormac and Cody • Cormac • Director and Chief Technologist • VMware • @CormacJHogan • http://cormachogan.com 4 • Cody • Technical Director for VMware Solutions • Pure Storage • @CodyHosterman • https://codyhosterman.com #SER1143BU CONFIDENTIAL
  • 5. Agenda Slide 1 Limits 2 VMFS-6 3 VAAI (ATS Miscompare and UNMAP) 4 SPBM (SIOCv2 and vSphere VM Encryption) 5 NFS v4.1 6 iSCSI 7 NVMe 5#SER1143BU CONFIDENTIAL
  • 7. vSphere 6.5 Scaling and Limits Paths • ESXi hosts now support up to 2000 paths – Increase from the 1024 per host paths supported previously Devices • ESXi hosts now support up to 512 devices – Increase from the 256 devices supported per host previously – Multiple targets are required to address more than 256 devices – This does not impact Virtual Volumes (aka VVols), which can address 16,383 VVol per PE 7#SER1143BU CONFIDENTIAL
  • 8. vSphere 6.5 Scaling and Limits • 512e Advanced Format Device Support • Capacity limits are now an issue with 512n (native) sector size used currently in disk drives • New Advanced Format (AF) drives use a 4K native sector size for higher capacity • These 4Kn devices are not yet supported on vSphere • For legacy applications and operating systems that cannot support 4KN drives, new 4K sector size drives that run in 512 emulation (512e) mode are now available – These drives will have a physical sector size of 4K but the logical sector size of 512 bytes • These drives are now supported on vSphere 6.5 for VMFS and RDM (Raw Device Mappings) 8#SER1143BU CONFIDENTIAL
  • 9. 512n/512e • # esxcli storage core device capacity list 9 [root@esxi-dell-e:~] esxcli storage core device capacity list Device Physical Logical Logical Size Format Type Blocksize Blocksize Block Count ------------------------------------ --------- --------- ----------- ---------- ----------- naa.624a9370d4d78052ea564a7e00011014 512 512 20971520 10240 MiB 512n naa.624a9370d4d78052ea564a7e00011015 512 512 20971520 10240 MiB 512n naa.624a9370d4d78052ea564a7e00011138 512 512 1048576000 512000 MiB 512n naa.624a9370d4d78052ea564a7e00011139 512 512 1048576000 512000 MiB 512n naa.55cd2e404c31fa00 4096 512 390721968 190782 MiB 512e naa.500a07510f86d6bb 4096 512 1562824368 763097 MiB 512e naa.500a07510f86d685 4096 512 1562824368 763097 MiB 512e naa.5001e820026415f0 512 512 390721968 190782 MiB 512n 512e (emulated) 512n (native) #SER1143BU CONFIDENTIAL
  • 10. DSNRO The setting “Disk.SchedNumReqOutstanding” aka “No of outstanding IOs with competing worlds” has changed in behavior • DSRNO can be set to a maximum of: – 6.0 and earlier: 256 – 6.5 and on: Whatever the HBA Device Queue Depth Limit is • Allows for extreme levels of performance 10#SER1143BU CONFIDENTIAL
  • 12. VMFS-6: On-disk Format Changes • File System Resource Management - File Block Format • VMFS-6 has two new “internal” block sizes, small file block (SFB) and large file block (LFB) – The SFB size is set to 1MB; the LFB size is set to 512MB – These are internal concepts for ”files” only; the VMFS block size is still 1MB • Thin disks created on VMFS-6 are initially backed with SFBs • Thick disks created on VMFS-6 are allocated LFBs as much as possible – For the portion of the thick disk which does not fit into an LFB, SFBs are allocated • These enhancements should result in much faster file creation times – Especially true with swap file creation so long as the swap file can be created with all LFBs – Swap files are always thickly provisioned 12 VMFS-6 #SER1143BU CONFIDENTIAL
  • 13. VMFS-6: On-disk Format Changes • Dynamic System Resource Files • System resource files (.fdc.sf, .pbc.sf, .sbc.sf, .jbc.sf) are now extended dynamically for VMFS-6 – Previously these were static in size – These may show a much smaller size initially, when compared to previous versions of VMFS, but they will grow over time • If the filesystem exhausts any resources, the respective system resource file is extended to create additional resources • VMFS-6 can now support millions of files / pointer blocks / sub blocks (as long as volume has free space) 13 VMFS-6 #SER1143BU CONFIDENTIAL
  • 14. vmkfstools – 500GB VMFS-5 Volume #SER1143BU CONFIDENTIAL 14 # vmkfstools –P –v10 /vmfs/devices/<device id>
  • 15. vmkfstools – 500GB VMFS-6 volume 15 Large file blocksIn VMFS-6, Sub Blocks for are used for Pointer Blocks. That's why Ptr Blocks max is shown as 0 here #SER1143BU CONFIDENTIAL
  • 16. VMFS-6: On-disk Format Changes • File System Resource Management - Journaling • VMFS is a distributed journaling filesystem • Journals are used on VMFS when performing metadata updates on the filesystem • Previous versions of VMFS used regular file blocks as journal resource blocks • In VMFS-6, journal blocks tracked in a separate system resource file called .jbc.sf. • Introduced to address VMFS journal related issues on previous versions of VMFS, due to the use of regular files blocks as journal blocks and vice-versa – E.g. full file system, see VMware KB article 1010931 16 VMFS-6 #SER1143BU CONFIDENTIAL
  • 17. New Journal System File Resource 17 VMFS-5 VMFS-6 #SER1143BU CONFIDENTIAL
  • 18. VMFS-6: VM-based Block Allocation Affinity • Resources for VMs (blocks, file descriptors, etc.) on earlier VMFS versions were allocated on a per host basis (host-based block allocation affinity) • Host contention issues arose when a VM/VMDK was created on one host, and then vMotion was used to migrate the VM to another host • If additional blocks were allocated to the VM/VMDK by the new host at the same time as the original host tried to allocate blocks for a different VM in the same resource group, the different hosts could contend for resource locks on the same resource • This change introduces VM-based block allocation affinity, which will decrease resource lock contention 18 VMFS-6 #SER1143BU CONFIDENTIAL
  • 19. VMFS-6: Parallelism/Concurrency Improvements • Some of the biggest delays on VMFS were in device scanning and filesystem probing • vSphere 6.5 has new, highly parallel, device discovery and filesystem probing mechanisms – Previous versions of VMFS only allowed one transaction at a time per host on a given filesystem; VMFS-6 supports multiple, concurrent transactions at a time per host • These improvements are significant for fail-over event, and Site Recover Manager (SRM) should especially benefit • Requirement to support higher limits on number of devices and paths in vSphere 6.5 19 VMFS-6 #SER1143BU CONFIDENTIAL
  • 20. Hot Extend Support • Prior to ESXi 6.5, VMDKs on a powered on VM could only be grown if size was less than 2TB • If the size of a VMDK was 2TB or larger, or the expand operation caused it to exceed 2TB, the hot extend operation would fail • This required administrators to typically shut down the virtual machine to expand it beyond 2TB • The behavior has been changed in vSphere 6.5 and hot extend no longer has this limitation 20 This is a vSphere 6.5 improvement, not specific to VMFS-6. This will also work on VMFS-5 volumes. #SER1143BU CONFIDENTIAL
  • 21. “Upgrading” to VMFS-6 • No direct ‘in-place’ upgrade of filesystem to VMFS-6 available. New datastores only. • Customers upgrading to vSphere 6.5 release should continue to use VMFS-5 datastores (or older) until they can create new VMFS-6 datastores • Use migration techniques such as Storage vMotion to move VMs from the old datastore to the new VMFS-6 datastore 21#SER1143BU CONFIDENTIAL
  • 22. 22 VMFS-6 Performance Improvements with resignature (discovery and filesystem probing) #SER1143BU CONFIDENTIAL
  • 24. VAAI vSphere APIs for Array Integration
  • 25. ATS Miscompare Handling (1 of 3) • The heartbeat region of VMFS is used for on-disk locking • Every host that uses the VMFS volume has its own heartbeat region • This region is updated by the host on every heartbeat • The region that is updated is the time stamp, which tells others that this host is alive • When the host is down, this region is used to communicate lock state to other hosts 25 ATS #SER1143BU CONFIDENTIAL
  • 26. ATS Miscompare Handling (2 of 3) • In vSphere 5.5 U2, we started using ATS for maintaining the heartbeat • ATS is the Atomic Test and Set primitive, one of the VAAI primitives • Prior to this release, we only used ATS when the heartbeat state changed • For example, we would use ATS in the following cases: – Acquire a heartbeat – Clear a heartbeat – Replay a heartbeat – Reclaim a heartbeat • We did not use ATS for maintaining the ‘liveness’ of a heartbeat • This change, using ATS to maintain heartbeat ‘liveness’, appears to have led to issues on certain storage arrays 26 ATS #SER1143BU CONFIDENTIAL
  • 27. ATS Miscompare Handling (3 of 3) • When an ATS Miscompare is received, all outstanding IO is aborted • This led to additional stress and load being placed on the storage arrays – In some cases, this led to the controllers crashing on the array • In vSphere 6.5, there are new heuristics added so that when we get a miscompare event, we retry the read and verify that there is a miscompare • If the miscompare is real, then we do the same as before, i.e. abort outstanding I/O • If the on-disk HB data has not changed, then this is a false miscompare • In the event of a false miscompare: – VMFS will not immediately abort IOs – VMFS will re-attempt ATS HB after a short interval (usually less than 100ms) 27 ATS #SER1143BU CONFIDENTIAL
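  For reference, the pre-6.5 workaround for arrays that struggled with ATS heartbeating was to revert VMFS-5 heartbeats to plain SCSI reads/writes via an advanced host setting (see VMware KB 2113956). The option name below is per that KB; with the 6.5 false-miscompare heuristics this knob should rarely be needed.

    # Disable ATS-based heartbeating on VMFS-5 (pre-6.5 workaround, per KB 2113956)
    esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5

    # Confirm the current value
    esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5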
  • 28. An Introduction to UNMAP UNMAP via datastore • VAAI UNMAP was introduced in vSphere 5.0 • Enables ESXi host to inform the backing storage that files or VMs had be moved or deleted from a Thin Provisioned VMFS datastore • Allows the backing storage to reclaim the freed blocks • No way of doing this previously, resulting in stranded space on Thin Provisioned VMFS datastores 28 UNMAP #SER1143BU CONFIDENTIAL
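  Before 6.5, this reclamation had to be triggered manually per datastore. A minimal sketch from the ESXi shell, assuming a hypothetical datastore label; -n is the number of 1MB blocks reclaimed per iteration:

    # Manually reclaim dead space on a thin-provisioned VMFS datastore
    esxcli storage vmfs unmap -l Datastore01 -n 200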
  • 29. Automated UNMAP in vSphere 6.5 Introducing Automated UNMAP Space Reclamation • In vSphere 6.5, there is now an automated UNMAP crawler mechanism for reclaiming dead or stranded space on VMFS datastores • UNMAP now runs continuously in the background • UNMAP granularity on the storage array – The granularity of the reclaim is set to 1MB chunks – Automatic UNMAP is not supported on arrays with an UNMAP granularity greater than 1MB – Auto UNMAP feature support is footnoted in the VMware Hardware Compatibility Guide (HCL) 29 UNMAP #SER1143BU CONFIDENTIAL
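  To check whether a given device advertises the VAAI Delete (UNMAP) primitive at all, the standard esxcli query can be used; the naa identifier below is a hypothetical example:

    # "Delete Status: supported" means the array accepts UNMAP for this device
    esxcli storage core device vaai status get -d naa.624a9370deadbeef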
  • 30. Some Considerations with Automated UNMAP • Only issued to VMFS-6 datastores that have powered-on VMs • Can take 12-24 hours to fully reclaim • Default behavior is on, but it can be turned off on the host (that host won’t participate)… – EnableVMFS6Unmap • …or on the datastore (no hosts will reclaim it), as shown in the commands below 30 UNMAP #SER1143BU CONFIDENTIAL
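  A minimal sketch of both switches, assuming a hypothetical datastore label:

    # Host-level: this host stops participating in automatic UNMAP
    esxcli system settings advanced set -o /VMFS3/EnableVMFS6Unmap -i 0

    # Datastore-level: check, then disable (priority none) or re-enable (low)
    esxcli storage vmfs reclaim config get -l Datastore01
    esxcli storage vmfs reclaim config set -l Datastore01 -p none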
  • 31. An Introduction to Guest OS UNMAP UNMAP via Guest OS • In vSphere 6.0, additional improvements to UNMAP facilitate the reclaiming of stranded space from within a Guest OS • Effectively, this gives a Guest OS in a thinly provisioned VM the ability to tell the backing storage that blocks are no longer in use • This allows the backing storage to reclaim that capacity, and the VMDK to shrink in size 31 UNMAP #SER1143BU CONFIDENTIAL
  • 32. Some Considerations with Guest OS UNMAP TRIM Handling • UNMAPs must be issued at certain block boundaries on VMFS, whereas TRIM has no such restrictions • While this should be fine on VMFS-6, which is now 4K aligned, certain TRIMs converted into UNMAPs may fail due to block alignment issues on previous versions of VMFS Linux Guest OS SPC-4 support • Initially, in-guest UNMAP support to reclaim dead space natively was limited to Windows 2012 R2 • Linux distributions check the SCSI version, and unless it is version 5 or greater, they do not send UNMAPs • With the SPC-4 support introduced in vSphere 6.5, Linux guest OSes will now also be able to issue UNMAPs, as shown below 32 UNMAP #SER1143BU CONFIDENTIAL
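  A quick way to see this from inside a Linux guest (on virtual hardware version 13): verify the virtual disk now reports SPC-4, then reclaim free space. Device and mount point names are hypothetical examples.

    # Confirm the vSphere virtual disk reports SPC-4 to the guest (needs sg3_utils)
    sg_inq /dev/sda | grep -i version

    # One-off reclaim of free space on a mounted filesystem
    fstrim -v /mnt/data

    # Or mount with the discard option for continuous, automatic TRIM/UNMAP
    mount -o discard /dev/sdb1 /mnt/data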
  • 33. Automated UNMAP Limits and Considerations Guest OS filesystem alignment • VMDKs are aligned on 1MB block boundaries • However, misalignment may still occur within the guest OS filesystem • This may also prevent UNMAP from working correctly • A best practice is to align guest OS partitions to the 1MB granularity boundary, which can be checked as shown below 33 UNMAP #SER1143BU CONFIDENTIAL
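  To verify this from inside a Linux guest, check that the partition starts on a 1MB boundary: with 512-byte logical sectors, a start sector that is a multiple of 2048 is 1MB aligned. Device names are hypothetical examples.

    # Start sector of the first partition: 2048 (or a multiple) = 1MB aligned
    cat /sys/block/sdb/sdb1/start

    # parted can also confirm optimal alignment for partition 1
    parted /dev/sdb align-check opt 1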
  • 34. Known Automated UNMAP Issues • vSphere 6.5 – Tools in the guest operating system might send unmap requests that are not aligned to the VMFS unmap granularity. – Such requests are not passed to the storage array for space reclamation. – Further info in KB article 2148987 – This issue is addressed in vSphere 6.5 P01 • vSphere 6.5 P01 – Certain versions of Windows Guest OS running in a VM may appear unresponsive if UNMAP is used. – Further info in KB article 2150591. – This issue is addressed in vSphere 6.5 U1. 34 UNMAP #SER1143BU CONFIDENTIAL
  • 38. The Storage Policy Based Management (SPBM) Paradigm • SPBM is the foundation of VMware's Software Defined Storage vision • Common framework to allow storage and host related capabilities to be consumed via policies. • Applies data services (e.g. protection, encryption, performance) on a per VM, or even per VMDK level 38#SER1143BU CONFIDENTIAL
  • 39. Creating Policies via Rules and Rule Sets • Rule – A Rule references a combination of a metadata tag and a related value, indicating the quality or quantity of the capability that is desired – These two items act as a key and a value that, when referenced together through a Rule, become a condition that must be met for compliance • Rule Sets – A Rule Set is comprised of one or more Rules – A storage policy includes one or more Rule Sets that describe requirements for virtual machine storage resources – Multiple “Rule Sets” can be leveraged to allow a single storage policy to define alternative selection parameters, even from several storage providers 39#SER1143BU CONFIDENTIAL
  • 41. 41 SPBM and Common Rules for Data Services provided by hosts - VM Encryption - Storage I/O Control v2 #SER1143BU CONFIDENTIAL
  • 42. 42 2 new features introduced with vSphere 6.5 - Encryption - Storage I/O Control v2 Implementation is done via I/O Filters #SER1143BU CONFIDENTIAL
  • 43. #SER1143BU CONFIDENTIAL Storage I/O Control v2 • VM Storage Policies in vSphere 6.5 have a new option called “Common Rules” • These are used for configuring data services provided by hosts, such as Storage I/O Control and Encryption. It is the same mechanism used for VAIO/IO Filters 43
  • 44. vSphere VM Encryption • vSphere 6.5 introduces a new VM encryption mechanism • It requires an external Key Management Server (KMS). Check the HCL for supported vendors • This encryption mechanism is implemented in the hypervisor, making vSphere VM encryption agnostic to the Guest OS • This not only encrypts the VMDK, but it also encrypts some of the VM Home directory contents, e.g. VMX file, metadata files, etc. • Like SIOCv2, vSphere VM Encryption in vSphere 6.5 is policy driven 44#SER1143BU CONFIDENTIAL
  • 45. #SER1143BU CONFIDENTIAL vSphere VM Encryption I/O Filter • Common rules must be enabled to add vSphere VM Encryption to a policy. • The only setting in the custom encryption policy is whether to allow I/O filters before encryption. 45
  • 46. 46 VM Encryption and SIOC policy #SER1143BU CONFIDENTIAL
  • 49. NFS v4.1 Improvements • Hardware Acceleration/VAAI-NAS Improvements – NFS 4.1 client in vSphere 6.5 supports hardware acceleration by offloading certain operations to the storage array. – This comes in the form of a plugin to the ESXi host that is developed/provided by the storage array partner. – Refer to your NAS storage array vendor for further information. • Kerberos IPv6 Support – NFS v4.1 Kerberos adds IPV6 support in vSphere 6.5. • Kerberos AES Encryption Support – NFS v4.1 Kerberos adds Advanced Encryption Standards (AES) encryption support in vSphere 6.5 49 NFSv4.1 #SER1143BU CONFIDENTIAL
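  A minimal sketch of mounting an NFS v4.1 datastore with Kerberos from the ESXi shell. Hostnames, export path, and datastore name are hypothetical, and the security flag value is per the 6.5 esxcli nfs41 namespace (confirm with esxcli storage nfs41 add --help on your build):

    # Multipathed NFS 4.1 mount secured with Kerberos
    esxcli storage nfs41 add -H nfs1.example.com,nfs2.example.com -s /export/ds01 -v NFS41-DS -a SEC_KRB5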
  • 51. iSCSI Enhancements • iSCSI Routing and Port Binding – ESXi 6.5 now supports having the iSCSI initiator and the iSCSI target reside in different network subnets with port binding (see the sketch below) • UEFI iSCSI Boot – VMware now supports UEFI (Unified Extensible Firmware Interface) iSCSI Boot on Dell 13th generation servers with the Intel x540 dual port Network Interface Card (NIC). 51 iSCSI #SER1143BU CONFIDENTIAL
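  A minimal port-binding sketch from the ESXi shell; the software iSCSI adapter and vmknic names are hypothetical examples:

    # Bind a VMkernel port to the software iSCSI adapter
    esxcli iscsi networkportal add -A vmhba64 -n vmk2

    # List current bindings for the adapter
    esxcli iscsi networkportal list -A vmhba64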
  • 53. NVMe (1 of 2) • Virtual NVMe Device – New virtual storage HBA for all-flash SAN/vSAN storage – Newer operating systems leverage multiple queues with NVMe devices • The virtual NVMe device allows VMs to take advantage of such in-guest I/O stack improvements – Improved performance compared to the virtual SATA device on local PCIe SSD devices • The virtual NVMe device provides 30-50% lower CPU cost per I/O • The virtual NVMe device achieves 30-80% higher IOPS 53 #SER1143BU CONFIDENTIAL
  • 54. NVMe (2 of 2) • Supported configuration of the virtual NVMe device: – Number of controllers per VM: 4 (enumerated as nvme0, …, nvme3) – Number of namespaces per controller: 15 (each namespace is mapped to a virtual disk; enumerated as nvme0:0, …, nvme0:15) – Maximum queues and interrupts: 16 (1 admin + 15 I/O queues) – Maximum queue depth: 256 (4K in-flight commands per controller) • Supports NVMe Specification v1.0e mandatory admin and I/O commands • Interoperability with all existing vSphere features, except SMP-FT 54 #SER1143BU CONFIDENTIAL
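  For illustration, the virtual NVMe controller ultimately resolves to a handful of VMX entries (virtual hardware version 13). The key names below follow the usual virtual-controller pattern (cf. scsi0.*) and the disk filename is a hypothetical example; treat this as a sketch rather than a definitive reference, since the controller is normally added through the vSphere client.

    nvme0.present = "TRUE"
    nvme0:0.present = "TRUE"
    nvme0:0.fileName = "example-disk.vmdk"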

Editor's Notes

  1. This section will describe the fundamentals behind vSAN’s architecture.
  2. VAAI – vSphere APIs for Array Integration; SPBM – Storage Policy Based Management
  3. The ability to address > 256 devices on a single target requires a new flat addressing scheme.
  4. Sector Readiness As part of future-proofing, all metadata on VMFS-6 is aligned on 4KB blocks. VMFS-6 is ready to fully support the new, larger capacity, 4KN sector disk drives when vSphere supports them.
  5. This change makes a lot of sense, since the DQLEN was always the min of DSNRO and HBA Device Queue Depth Limit, why allow someone to make DSNRO larger? And why not larger than 256?
  6. There is no way to select the block size. It is always 1MB. The SFB and LFB are internal concepts and are used automatically based on the type of VMDK. For example: LZT and EZT VMDKs will always use LFBs as long as they are available for allocation. In addition, swap files also use LFBs. LFBs help reduce provisioning time and VM boot time (because swap file creation is faster for large VMs). Thin VMDKs always use SFBs. The VMFS volume has to be reasonably big for LFBs to get allocated
  7. The idea here is that we can create volumes that are small in capacity but do not consume a huge amount of overhead for metadata. The other reason is so that we do not cap the maximum capacity of a volume at its initial formatting size, but can grow it dynamically over time (future-proofing).
  8. Note 1: VMFS-5 Note 2: 1MB file blocks Note 3: Number of sub blocks created is 32000 (these are 8K in size) on VMFS-5
  9. Note 1: VMFS file blocks are still 1MB on VMFS-6 Note 2: Large File Blocks are not displayed in vmkfstools Note 3: Pointer blocks have been replaced with sub-blocks Note 4: Far fewer (about half) sub-blocks on this newly created VMFS-6 volume, which is the same size as the VMFS-5 volume created earlier Note 5: Sub-blocks went from 64K on VMFS-3 to 8K on VMFS-5 and back to 64K on VMFS-6 Note 6: Sub-blocks back a file initially, but when the size exceeds 1 sub-block, we switch to file blocks for backing - https://blogs.vmware.com/vsphere/2012/02/something-i-didnt-known-about-vmfs-sub-blocks.html
  10. VMFS allocates space for the journal when the file system is first accessed. Note that the journal resource file can also be dynamically extended. Tracking journal blocks separately in a new resource file reduces the risk of issues arising from journal blocks being interpreted as regular file blocks. Note: As I understand it, if a filesystem filled up, one could not even delete a file, as this would require a journal block to be allocated for the metadata operation. By moving journal blocks to their own resource file, one should now be able to allocate a journal block to enable the delete operation, making it much easier to deal with full-filesystem issues.
  11. In VMFS, every datastore gets its own hidden files to save the file-system structure: .fbb – file blocks, .fdc – file descriptors, .pbc – pointer blocks, .pb2 – pointer blocks (second level of indirection), .sbc – sub-blocks, .vh – volume header, .sdd – system data directory
  12. It is important to note that this does not require VMFS-6 or Virtual Machine HW version 13 to work. VMFS-5 will also support this functionality as long as ESXi is at version 6.5.
  13. ATS is a replacement lock mechanism for SCSI reservations on VMFS volumes when doing metadata updates. Basically, ATS locks can be considered a mechanism to modify a disk sector which, when successful, allows an ESXi host to do a metadata update on a VMFS. This includes allocating space to a VMDK during provisioning, as certain characteristics need to be updated in the metadata to reflect the new size of the file. ATS is a T10 standard and uses opcode 0x89 (COMPARE AND WRITE). We always issue ATS with a size of 1 LBA, so that is 1K in total in each ATS command: 512 bytes of test data + 512 bytes of set data. What are we checking? We are checking the ATS data, which happens to be the entire HB data, which is also in the Test-Image and Set-Image; not just one of the fields. If any field is different, then we need to consider doing an HB reclaim.
  14. Main issues: - The storage returning "ATS MISCOMPARE" incorrectly. In this scenario, we do not know why the storage returned the miscompare; it is a storage-side issue. One storage vendor said it was because the storage was overloaded; another attributed it to an existing reservation on the LUN, etc. - VMFS detecting the miscompare incorrectly. In this case, an HB I/O (1) timed out and VMFS aborted it; however, the I/O made it to the disk before the abort. VMFS then retried the ATS using the same test image as (1) (because the previous I/O was aborted, the assumption was that the ATS never made it to the disk), and since the ATS had in fact reached the disk before the abort, the storage returned "ATS MISCOMPARE". - Some storage arrays wrote the ATS data, yet returned a miscompare, so we had to handle this case too.
  15. Basically, when we get a miscompare on an ATS HB, we read the HB image from the disk and compare it against both the Test-Image and the Set-Image used for the ATS command that resulted in the miscompare. It is the entire HB slot (a memcmp of 512 bytes on VMFS-5 and 4K on VMFS-6).
  16. TRIM is the ATA equivalent of SCSI UNMAP. A TRIM operation gets converted to UNMAP in the I/O stack, which is SCSI. However, there are some issues with TRIM getting converted into UNMAP.
  17. TRIM is the ATA equivalent of SCSI UNMAP. A TRIM operation gets converted to UNMAP in the I/O stack, which is SCSI. However, there are some issues with TRIM getting converted into UNMAP.
  18. Storage Policy-Based Management (SPBM) is the foundation of the VMware SDS Control Plane and enables vSphere administrators to overcome upfront storage provisioning challenges, such as capacity planning, differentiated service levels and managing capacity headroom, whether using vSAN or Virtual Volumes (VVols) on external storage arrays. SPBM provides a single unified control plane across a broad range of data services and storage solutions. The framework helps to align storage with the application demands of your virtual machines. SPBM is about ease and agility. Traditional architectural models relied heavily on the capabilities of an independent storage system to meet the protection and performance requirements of workloads. Unfortunately, the traditional model was overly restrictive, in part because standalone hardware-based storage solutions were not VM-aware and were limited in their ability to apply unique settings to various workloads. SPBM lets you define requirements for a VM or collection of VMs. This SPBM framework is the same framework used for storage arrays supporting VVols, so a common approach to managing and protecting data can be employed, regardless of the backing storage. Overview: Key to the software-defined storage (SDS) architectural model. SPBM is the common framework that abstracts traditional storage-related settings away from hardware and into the hypervisor, and applies storage-related settings for protection and performance on a per-VM, or even per-VMDK, level.
  19. https://blogs.vmware.com/vsphere/2014/10/vsphere-storage-policy-based-management-overview-part-2.html
  20. Common Rules – these come from I/O Filters on hosts (VMCrypt, SIOCv2, VAIO). Rule Sets come from storage, either vSAN or VVols.
  21. Available since vSphere 6.5.
  22. This is before I added the I/O Accelerator from Infinio. These are provided by default in vSphere.
  23. When the policy has been created, it may be assigned to newly deployed VMs during provisioning, or to already existing VMs by assigning the new policy to the whole VM (or just an individual VMDK) via its settings. One thing to note is that I/O Filter based IOPS limits do not look at the size of the I/O: there is no normalization, so a 64K I/O is not counted as 2 x 32K I/Os. The limit is a fixed number of IOPS irrespective of I/O size. In this initial release of SIOC v2 in vSphere 6.5, there is no support for vSAN or Virtual Volumes; SIOC v2 is only supported with VMs that run on VMFS and NFS datastores. SIOC v2 policies override SIOC v1 policies.
  24. The VAAI-NAS component has four primitives: Full File Clone, Fast File Clone, Extended Statistics and Reserve Space. The following Advanced Encryption Standards (AES) are now supported: AES256-CTS-HMAC-SHA1-96 and AES128-CTS-HMAC-SHA1-96. The DES-CBC-MD5 encryption type is not supported with NFSv4.1 in vSphere 6.5. Kerberos Integrity SEC_KRB5I support: vSphere 6.5 introduces Kerberos Integrity (SEC_KRB5I), a new feature that uses checksums to protect NFS data.
  25. Support the routing of iSCSI connections and sessions, leverage separate gateways per VMkernel interface, and use port binding to reach targets in different subnets. VMware now supports UEFI (Unified Extensible Firmware Interface) iSCSI Boot on Dell 13th generation servers with the Intel x540 dual port Network Interface Card (NIC). On the System BIOS, select Network Settings, followed by UEFI iSCSI Device Settings. In the Connection Settings, you need to populate initiator and target settings, as well as any appropriate VLAN and CHAP settings if required. This device will now appear in the list of options in the UEFI Boot Settings. The NIC Configuration must then have its Legacy Boot Protocol set to iSCSI Primary, and also be populated with initiator and target settings. You can now install your ESXi image and use any of the LUNs from the iSCSI target to install to. Subsequent reboots will boot from the ESXi image on the iSCSI LUN.