Latest imported feed items on VMware Blogs https://blogs.vmware.com

<![CDATA[VMware Integrated OpenStack 4.1 Released]]> http://vcdx56.com/2018/01/vmware-integrated-openstack-4-0-released/ http://vcdx56.com/2018/01/vmware-integrated-openstack-4-0-released/ Sat, 20 Jan 2018 21:29:05 +0000 Continue reading »

]]>
VCDX56
<![CDATA[VCSA 6.5 Fails to Boot]]> https://www.codyhosterman.com/2018/01/vcsa-6-5-fails-to-boot/ https://www.codyhosterman.com/2018/01/vcsa-6-5-fails-to-boot/ Sat, 20 Jan 2018 21:04:03 +0000 Continue reading VCSA 6.5 Fails to Boot ]]>
Cody Hosterman
<![CDATA[NSX Layer 7 Application aware Distributed Firewall]]> http://feedproxy.google.com/~r/M80arm-VirtualizationWarrior/~3/Jszo2P20SI4/nsx-layer-7-application-aware.html http://feedproxy.google.com/~r/M80arm-VirtualizationWarrior/~3/Jszo2P20SI4/nsx-layer-7-application-aware.html Sat, 20 Jan 2018 19:45:00 +0000 here.

So, let's test the new application-aware context feature of the Distributed Firewall. I've got a test VM called WEB01 with an IP address of 192.168.1.11, which I can currently SSH to over TCP/22:


I've created a new DFW rule to block traffic from ANY to ANY, with the service set to SSH (TCP/22):


I can no longer access WEB01 from anywhere:


However, if I change the port the SSH daemon on WEB01 listens on from TCP/22 to TCP/8080 and restart the daemon, I can successfully SSH back into WEB01:
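A quick way to reproduce this before-and-after check without an interactive SSH session is a plain TCP connect test. This is a minimal sketch (the helper name is ours, and the IP address 192.168.1.11 is the test VM from the post):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or otherwise blocked (e.g. by the DFW)
        return False

# Check both the default SSH port and the relocated one:
for port in (22, 8080):
    state = "open" if is_port_open("192.168.1.11", port) else "blocked/closed"
    print(f"TCP/{port}: {state}")
```

With the TCP/22-based rule in place, only the relocated port should report open, which is exactly the gap the Layer 7 rule closes below.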



With NSX 6.4 and the new application context firewall rules, we can modify the rule to block the SSH application rather than TCP/22. These are available from the service list and are prefixed with APP_:



Now if I try to connect to WEB01 again, over TCP/8080 or TCP/22, the traffic is blocked:



The full list of Layer 7 protocols currently supported in NSX 6.4 is:

[Screenshot of the supported protocol list was not preserved in this feed import.]
]]>
Virtualization Warrior
<![CDATA[vROPS 6.6 Sizing Guidelines Worksheet]]> http://feedproxy.google.com/~r/Virtualization24x7/~3/VbGz-oaLTTE/vrops-66-sizing-guidelines-worksheet.html http://feedproxy.google.com/~r/Virtualization24x7/~3/VbGz-oaLTTE/vrops-66-sizing-guidelines-worksheet.html Sat, 20 Jan 2018 10:20:00 +0000 Access KB 2150421 from <<<<>>>>. Scroll Down till the end of KB article and there you will find link for downloading the attachment under attachment category.

https://kb.vmware.com/s/article/2150421
 ]]>
 ]]>
Virtualization The Future
<![CDATA[vRealize Operations Manager 6.6 and 6.6.1 Sizing Guidelines (2150421)]]> http://feedproxy.google.com/~r/Virtualization24x7/~3/Sj7aaXIiS_E/vrealize-operations-manager-66-and-661.html http://feedproxy.google.com/~r/Virtualization24x7/~3/Sj7aaXIiS_E/vrealize-operations-manager-66-and-661.html Sat, 20 Jan 2018 10:10:00 +0000
By default, VMware offers Extra Small, Small, Medium, Large, and Extra Large configurations during installation. You can size the environment according to the existing infrastructure to be monitored. After the vRealize Operations Manager instance outgrows the existing size, you must expand the cluster to add nodes of the same size.
Characteristics / Node Size | Extra Small | Small | Medium | Large | Extra Large | Standard Size Remote Collector | Large Size Remote Collector
vCPU | 2 | 4 | 8 | 16 | 24 | 2 | 4
Memory (GB) | 8 | 16 | 32 | 48 | 128 | 4 | 16
Maximum Memory Configuration (GB) | N/A | 32 | 64 | 96 | N/A | 8 | 32
Single-Node Maximum Objects | 250 | 2,400 | 8,500 | 15,000 | 35,000 | 1,500 (****) | 15,000 (****)
Single-Node Maximum Collected Metrics (**) | 70,000 | 800,000 | 2,500,000 | 4,000,000 | 10,000,000 | 600,000 | 4,375,000
Multi-Node Maximum Objects Per Node (***) | NA | 2,000 | 6,250 | 12,500 | 30,000 | NA | NA
Multi-Node Maximum Collected Metrics Per Node (***) | NA | 700,000 | 1,875,000 | 3,000,000 | 7,500,000 | NA | NA
Maximum number of nodes in a cluster | 1 | 2 | 16 | 16 | 6 | 50 | 50
Maximum End Point Operations Management agents per node | 100 | 300 | 1,200 | 2,500 | 2,500 | 250 | 2,000
Maximum Objects at the maximum supported number of nodes (***) | 250 | 4,000 | 75,000 | 150,000 | 180,000 | NA | NA
Maximum Metrics at the maximum supported number of nodes (***) | 70,000 | 1,400,000 | 19,000,000 | 37,500,000 | 45,000,000 | NA | NA

Requirements common to all node sizes:
Datastore latency: consistently lower than 10 ms, with possible occasional peaks up to 15 ms
Network latency for data nodes: < 5 ms
Network latency for remote collectors: < 200 ms
Network latency between agents and vRealize Operations Manager nodes and remote collectors: < 20 ms
vCPU to physical core ratio for data nodes (*): 1 vCPU to 1 physical core at scale maximums
IOPS and Disk Space: see the attached Sizing Guidelines worksheet for details.
(*) It is critical to allocate enough CPU resources for environments running at scale maximums to avoid performance degradation. Refer to the vRealize Operations Manager Cluster Node Best Practices in the vRealize Operations Manager 6.6 Help for more guidelines regarding CPU allocation.
(**) Metric numbers reflect the total number of metrics that are collected from all adapter instances in vRealize Operations Manager. To get this number, you can go to the Cluster Management page in vRealize Operations Manager, and view the adapter instances of each node at the bottom of the page. You can get the number of metrics collected by each adapter instance. The sum of these metrics is what is estimated in this sheet. Note: The number shown in the overall metrics on the Cluster Management page reflects the metrics that are collected from different data sources and the metrics that vRealize Operations Manager creates.
(***) In large, 16-node configurations, note the reduction in maximum metrics to permit some head room. This adjustment is accounted for in the calculations.
(****) The object limit for the remote collector is based on the VMware vCenter adapter.

What's new with vRealize Operations 6.6.x sizing:
Monitor larger environments with improved scale: a vRealize Operations 6.6 cluster can run up to 6 Extra Large nodes, supporting up to 180,000 objects and 45,000,000 metrics.
Monitor more vCenter Servers: a single instance of vRealize Operations can now monitor up to 60 vCenter Servers.
Deploy larger-sized nodes: a Large Remote Collector can support up to 15,000 objects.
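The Extra Large cluster figures quoted above follow directly from the per-node values in the sizing table; a quick arithmetic check (node count and per-node limits from the table):

```python
# Extra Large: at most 6 nodes per cluster,
# 30,000 objects and 7,500,000 collected metrics per node (multi-node limits)
nodes = 6
max_objects = nodes * 30_000
max_metrics = nodes * 7_500_000
print(max_objects, max_metrics)  # 180000 45000000 -- matches the quoted cluster maximums
```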
Notes:
  • Maximum number of remote collectors (RC) certified: 50.
  • Maximum number of VMware vCenter adapter instances certified: 60.
  • Maximum number of VMware vCenter adapter instances that were tested on a single collector: 40.
  • Maximum number of certified concurrent users: 200.
  • This maximum number of concurrent users is achieved on a system configured with objects and metrics at 50% of the supported maximums (for example: 4 Large nodes with 20K objects, or 7 Medium nodes with 17.5K objects). When the cluster is running with nodes filled to maximum object or metric levels, the maximum is 5 concurrent users per node (for example: 16 nodes with 150K objects can support 80 concurrent users).
  • Maximum number of the End Point Operations Management agents per cluster certified - 10,000 on 4 large nodes cluster.
  • When High Availability (HA) is enabled, each object is replicated on one other node of the cluster, so the object limit for an HA environment is half that of a non-HA configuration. vRealize Operations HA tolerates only one node failure; to avoid a single point of failure, place vRealize Operations cluster nodes on different hosts in the cluster.
  • An object in this table represents a basic entity in vRealize Operations Manager that is characterized by properties and metrics collected from adapter data sources. Examples of objects include a virtual machine, a host, or a datastore for a VMware vCenter adapter; a storage switch port for a storage devices adapter; an Exchange server, a Microsoft SQL Server, a Hyper-V server, or a Hyper-V virtual machine for a Hyperic adapter; and an AWS instance for an AWS adapter.
  • The limitation of a collector per node: The object or metric limit of a collector is the same as the scaling limit of objects per node. The collector process on a node will support adapter instances where the total number of resources is not more than 2,400, 8,500, and 15,000 respectively, on a small, medium, and large multi-node vRealize Operations Manager cluster. For example, a 4-node system of medium nodes will support a total of 25,000 objects. However, if an adapter instance needs to collect 8,000 objects, a collector that runs on a medium node cannot support that as a medium node can handle only 6,250 objects. In this situation, you can add a large remote collector or use a configuration that uses large nodes instead of small nodes.
  • A large node can collect more than 20,000 vRealize Operations for Horizon objects when a dedicated remote collector is used.
  • A large node can collect more than 20,000 vRealize Operations for Published Apps objects when a dedicated remote collector is used.
  • If the number of objects is close to the high-end limit, dependent on the monitored environment, increase the memory on the nodes. Contact Product Support for more details.
  • The performance of vRealize Operations Manager can be impacted by the use of snapshots. The presence of a snapshot on disk causes slow I/O performance and high CPU co-stop values, which degrade the performance of vRealize Operations Manager. Use snapshots minimally in production setups.
  • The sizing guides are version specific; use the sizing guide for the vRealize Operations version you plan to deploy. Extra Small and Small nodes are designed for test environments and proofs of concept: we do not recommend scaling out Small nodes beyond two nodes, and we do not recommend scaling out Extra Small nodes at all.
  • Scale up before you scale out: increase memory instead of configuring more nodes to monitor larger environments. We recommend scaling up to the possible maximum (default memory × 2) and then scaling out, provided the underlying hardware supports the scale requirements. Example: a Large node's default memory requirement is 48 GB and can, if needed, be configured up to 96 GB. All nodes must be scaled equally.
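The multi-node per-node limits above can be turned into a rough node-count estimate. This is a minimal sketch, not the official worksheet: the helper name is ours, the capacity figures come from the sizing table, the halving-under-HA rule comes from the HA note above, and it ignores the head-room adjustment footnoted for 16-node configurations:

```python
import math

# Multi-node per-node object limits from the sizing table above
OBJECTS_PER_NODE = {"small": 2_000, "medium": 6_250, "large": 12_500, "extra_large": 30_000}
MAX_NODES = {"small": 2, "medium": 16, "large": 16, "extra_large": 6}

def nodes_needed(objects: int, size: str, ha: bool = False) -> int:
    """Estimate data nodes required for an object count at a given node size.

    With HA enabled every object is replicated once, so effective
    per-node capacity is halved (see the HA note above).
    """
    capacity = OBJECTS_PER_NODE[size] // 2 if ha else OBJECTS_PER_NODE[size]
    nodes = math.ceil(objects / capacity)
    if nodes > MAX_NODES[size]:
        raise ValueError(f"{objects} objects exceed a {size} cluster's supported maximum")
    return nodes

print(nodes_needed(20_000, "medium"))           # ceil(20,000 / 6,250)  -> 4 nodes
print(nodes_needed(20_000, "medium", ha=True))  # halved capacity       -> 7 nodes
```

For a real deployment, the attached worksheet remains authoritative, since it also accounts for metrics, IOPS, and disk space.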
]]>
Virtualization The Future
<![CDATA[Remove a VM from a vSphere vApp]]> https://vinfrastructure.it/2018/01/remove-vm-vsphere-vapp/ https://vinfrastructure.it/2018/01/remove-vm-vsphere-vapp/ Sat, 20 Jan 2018 07:17:39 +0000 Seems that the vSphere Web Client has some bugs, also with the latest vSphere 6.5 version. It should be the main web client (see a list of possible GUI clients in vSphere 6.5), but sometimes does not work as expected (without considering possible Flash bugs). If you are using vSphere vApp (feature that require a DRS enabled cluster), you may have some issues when you need to remove a VM outside from a vApp. You will notice that all the VM inside a vApp will loose some features, at least in the vSphere Web Client […]

The post Remove a VM from a vSphere vApp appeared first on vInfrastructure Blog.

]]>
vInfrastructure Blog
<![CDATA[Portable License Unit: Hybrid Cloud Ready Licensing Metric for vRealize Suite]]> http://feedproxy.google.com/~r/Virtualization24x7/~3/Tv8cCjQY6wo/portable-license-unit-hybrid-cloud.html http://feedproxy.google.com/~r/Virtualization24x7/~3/Tv8cCjQY6wo/portable-license-unit-hybrid-cloud.html Sat, 20 Jan 2018 06:08:00 +0000 VMware is introducing Portable License Unit (PLU) for VMware vRealize® Suite that provides flexibility to deploy the same vRealize Suite license across hybrid and heterogeneous environments such as VMware vSphere®-based virtualized environment, third-party hypervisors, physical servers, VMware vCloud® Air™, and all other supported public clouds. PLU combines the benefits of managing unlimited Operating System Instances (OSIs) / Virtual Machines (VMs) on one vSphere CPU or up to 15 OSIs on a supported public cloud using the same license key.
For more information, refer to:
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vrealize/vmware-portable-license-unit.pdf]]>
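The PLU entitlement described above lends itself to a quick back-of-the-envelope estimate. This is a minimal sketch under the stated ratios only (one PLU per vSphere CPU, up to 15 public-cloud OSIs per PLU); the function name is ours, and the linked PDF is authoritative for actual licensing:

```python
import math

def plu_count(vsphere_cpus: int = 0, public_cloud_osis: int = 0) -> int:
    """Estimate Portable License Units needed.

    Per the description above: one PLU covers one vSphere CPU (with
    unlimited OSIs/VMs on it), or up to 15 OSIs on a supported public cloud.
    """
    return vsphere_cpus + math.ceil(public_cloud_osis / 15)

print(plu_count(vsphere_cpus=8))        # 8 PLUs for 8 vSphere CPUs
print(plu_count(public_cloud_osis=40))  # ceil(40 / 15) = 3 PLUs
print(plu_count(8, 40))                 # 11 PLUs for a hybrid estate
```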
Virtualization The Future
<![CDATA[How to install VMware Tools for Easy Cross-Hypervisor Migrations]]> https://www.jpaul.me/2018/01/install-vmware-tools-easy-cross-hypervisor-migrations/ https://www.jpaul.me/2018/01/install-vmware-tools-easy-cross-hypervisor-migrations/ Sat, 20 Jan 2018 03:58:05 +0000 Overview This is a companion article to the Hyper-V Integration Services Installation article I did recently. If you are looking to move a non-VMware based machine to VMware with minimal headaches then this is the article for you. These steps were meant to be used with Zerto Virtual Replication, however, they can be used independently…

The post How to install VMware Tools for Easy Cross-Hypervisor Migrations appeared first on Justin's IT Blog.

]]>
Justin’s IT Blog