
Author Archives: Michael Roy


About Michael Roy

Michael Roy is the Product Line Manager for Desktop Hypervisor products such as VMware Fusion and Workstation.

Fusion on Apple Silicon: Progress Update

It’s been a few months since our informal announcement via Twitter back in November, where we committed to delivering VMware VMs on Apple silicon devices, so we wanted to take this opportunity to share a bit about our progress with our little project to bring Fusion to life on Apple silicon Macs this year.

The quick read

Before we get right into it, I just want to summarize our position way up front with a quick tl;dr:

  • We will be delivering a Tech Preview of VMware Fusion for macOS on Apple silicon this year.
  • Development is moving along very well, meeting or exceeding our expectations, but there are challenges and much work still to do.
  • We don’t plan to support installing or running x86 VMs on Macs with Apple silicon.
  • Windows is second priority behind Linux.
    • Microsoft currently does not sell licenses of Windows 10 ARM for virtual machines.
    • Insider builds of Windows 10 ARM may only be installed on systems with a licensed version of Windows 10, which is currently not available on Apple hardware.
  • macOS VMs are not in scope in the short term. There are challenges there which will require Apple to work with us to resolve.

A new generation of Macs

With the introduction of Apple silicon, Apple revealed that its new CPU line would be based on the same Arm architecture found in the iPhone and iPad, as opposed to the x86 or x86_64 Intel (or AMD) architectures found in desktops and notebooks. The new architecture brings incredible performance gains, thermal improvements, and dramatically improved battery life, but it poses some unique challenges for virtualization apps like Fusion Pro and Player.

With the first generation of Apple silicon chips, namely the M1, Apple has made significant performance and efficiency improvements, with claims of “Up to 2.8x CPU performance; Up to 5x the graphics speed; Up to 11x faster machine learning; And up to 20 hours of battery life” on a new 13” MacBook Pro.

Seeing improvements like that, it comes as no surprise to us that when users got their hands on M1 devices they naturally wanted to run virtual machines on them! Why not take advantage of that extra CPU power and carry around a single notebook instead of 2 laptops, right? We agree.

In much the same way it did when moving from PowerPC to Intel CPUs back in 2006, Apple introduced a new version of Rosetta to support running Intel apps on Apple silicon. For the most part, apps ‘just work’, even if they’re a bit slower.

However, for those who need to run another operating system like Linux or Windows, Rosetta 2 doesn’t support virtualization, and Apple silicon Macs don’t support Boot Camp. That means it’s time for us to innovate and rebuild our beloved desktop hypervisor for Macs, VMware Fusion, to support the next generation of Apple hardware.

Fusion’s roots

With the 2006 transition, a tiny (but incredible!) team of engineers at VMware saw an opportunity. As a side project, this small group was able to essentially rebuild Workstation to run on the Mac using Apple’s UI, thus creating the foundation of what we now know as VMware Fusion.

One benefit our users appreciate of Fusion’s older “enterprise-grade” siblings, Workstation on the desktop and ESXi in the data center, is that they give organizations a consistent operating model. A VMware VM behaves pretty much the same regardless of which product it’s running on. Developers and Operations teams can move VMs and templates between data centers, desktops, and clouds with ease. This is super important to us and to our customers, particularly as more and more operational workflows become automated.

For those who might not have studied the history of VMware: the hypervisor stack in Fusion shares much of the same code as the stack that runs in the majority of the world’s data centers, ESXi, which itself was created by breaking apart the internals of VMware Workstation into its functionally discrete components: storage, networking and compute.

Some context

Now, we’re no stranger to Arm CPUs, having shipped what we currently call a Fling: ESXi Arm Edition. Delivering ESXi for Arm has been a multi-year effort, and yet it’s still not quite a Product the way ESXi on x86 currently is.

So when we learned about the M1 devices, we knew we had the in-house expertise on both the Arm team, and also on the Fusion bench, to set in motion a plan to re-invent our Mac desktop hypervisor to support this incredible new platform. Being able to build on top of what we’ve learned with our still-evolving Fling has been crucial, and thankfully we have some overlap in the teams’ history, meaning folks have exactly the right experience needed for this project. 

To support Fusion on M1 devices, while maintaining code and feature compatibility with our ecosystem, we are essentially bringing the core of these two projects together. This is a much different task than simply shipping a single product like Fusion, to say the least!

So, how’s it going?

Well, our initial assessments are going very well! For starters, we have a variety of Arm operating systems booting in VMs, and we are very impressed with the performance!

Because of our kinship with ESXi, we have a major architectural advantage over our competition. ESXi is designed to be enterprise-grade, with security, resiliency and performance advantages that both Fusion and Workstation inherit.

Here are a couple of screenshots from my test machine, an M1 MacBook Air with 8 CPU + 8 GPU cores and 16GB of RAM:

You can see 7 VMs booted in the Library window, with Fedora 34 up front and Ubuntu 21.04 in the Preview window. Still runs 20 degrees (Celsius) cooler than my Intel Mac Mini.
Same VMs as above, but in separate windows, elegantly viewed with Exposé.

You can see here that I have 7 Arm VMs booted at once… 2 are CLI-only (Photon and BSD), the others are full desktops… each is configured with 4 CPUs and 8GB of RAM. 6 different Linux flavors and 1 FreeBSD… on a MacBook Air. On battery. No fans. Yep.

Of course, just booting a bunch of VMs that are mostly idle isn’t quite a ‘real world experience’, nor is it the same as doing some of the stress testing that we perform in the leadup to a release.  Even with that said, and note that I’m using ‘debug’ builds which perform slower, in my 12 years at VMware I’ve never seen VMs boot and run like this. So we’re very encouraged by our early results, and seriously can’t wait to get it on every Apple silicon equipped Mac out there.

Sounds good, so what’s the hold up?

While booting all of that at once and having it remain usable (which it all has been in my testing) is an impressive feat in itself, we still have a ways to go, and some challenges along the way.

For instance, the best Linux VM experience comes from installing VMware Tools, and by and large open-vm-tools is included with every Linux distribution on x86. Currently, however, open-vm-tools is not readily available for the aarch64 (Arm) platform.

Quick refresher, VMware Tools:

  • Is included by default with most Linux distributions.
  • Is part of what enables the features that work between host and guest.
  • Improves VM performance.
  • Provides a consistent management layer.
  • Delivers graphics drivers and ‘plumbing’ (via open-vm-tools-desktop).

The ESXi-Arm project, in addressing this gap, currently has users build open-vm-tools from source themselves. That works just fine for some people, but obviously not everyone is comfortable doing that.
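For the curious, building from source follows a standard autotools flow. The dependency package names below are Debian/Ubuntu-style and purely illustrative; the exact list varies by distribution, so treat this as a sketch rather than official instructions:

```shell
# Sketch: building open-vm-tools from source on an Arm Linux guest.
# Dependency package names are illustrative and vary by distro.
sudo apt-get install -y git automake autoconf libtool pkg-config \
    libglib2.0-dev libmspack-dev
git clone https://github.com/vmware/open-vm-tools.git
cd open-vm-tools/open-vm-tools   # the buildable tree lives in this subdirectory
autoreconf -i
./configure
make
sudo make install
```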

Because open-vm-tools is such a key building block to support the experience we want to deliver for Linux VMs on every platform we support, we’re working with various Linux upstream projects to include the necessary kernel patches to support open-vm-tools and open-vm-tools-desktop on arm64/aarch64 architectures so they can be included in OS distributions. These changes will also benefit the ESXi-Arm Fling by not having to compile Tools from source going forward, so things should ‘just work’ out of the box, as users have come to expect.

So for now, while VMs are booting, we don’t currently have things like 3D hardware accelerated graphics, and other features that require Tools which Fusion users on Intel Macs have come to expect.

That said, even without hardware 3D and while using debug-enabled-builds, we are super impressed with how well things are performing, even against the GA release of our competition.

What about Windows?

Of course, users are expecting to run Windows in a virtual machine, much like we’ve been used to for many years now. With Windows on ARM however, this presents a unique situation, particularly as it relates to Licensing. 

The Insider Preview program says: “To install Windows 10 Insider Preview Builds, you must be running a licensed version of Windows 10 on your device.” And as far as we are aware, there is no way to buy a Windows 10 ARM license for a Mac with Apple silicon. There have been plenty of discussions on the topic from users and the media, and the Insider Download Page reads:

With Windows 10 on ARM Insider Preview builds, you can create 64-bit ARM (ARM64) VMs in Hyper-V on Windows 10 ARM-based PCs. Creating ARM64 VMs is not supported on x64 hardware.

ARM64 VMs are only supported on devices that meet the pre-requisites:

  • Windows 10 ARM-based PCs with a Microsoft SQ1, Microsoft SQ2, Qualcomm Snapdragon 8cx, or Qualcomm Snapdragon 850 processor
  • Windows 10 Pro or Enterprise, build 19559 or newer
  • Hyper-V enabled (instructions)

You can see it doesn’t say anything about Apple silicon. We have reached out to Microsoft for comment and clarification on the matter. 

For the time being, our work has been focused on Linux guest operating systems, and we’re confident that if Microsoft offers Windows on Arm licenses more broadly, we’ll be ready to officially support it.

What about x86 emulation?

We get asked regularly about running x86 VMs on M1 Macs. It makes total sense… If Apple can emulate x86 with Rosetta 2, surely VMware can do something too, right?

Well, the short answer is that there isn’t exactly much business value relative to the engineering effort that is required, at least for the time being. For now, we’re laser focused on making Arm Linux VMs on Apple silicon a delight to use.

So, to be a bit blunt, running x86 operating systems on Apple silicon is not something we are planning to deliver with this project. Installing Windows or Linux from an x86 ISO, for example, will not work.

Why not? Let’s explore:

  • For Windows on Arm, Microsoft has an evolving x86 emulation layer within the OS itself.
  • For cloud, OCI multi-arch containers can be built with both Arm and x86 layers in a single image from the same build process.
  • For Linux, there are only a handful of apps that haven’t yet been cross-compiled for Arm.
  • There are already great open source tools whose primary function is x86 emulation (e.g., QEMU).

More personally speaking, I really don’t think the next era of Macs will be defined by “switchers” in the same way the previous one was. I expect this platform will be one to more rapidly introduce new experiences at the expense of cutting away from the past. Where we’re headed is anyone’s guess, but I am confident the direction we’re moving isn’t backwards.

That being said, we’re always looking to broaden our horizon with respect to use cases, so if there’s something we’re missing, give us (or me) a shout, we’re eager to hear about it.

When can we get it?

The beta that started it all… now 15 years young!

Ah yes, the burning question: When can everyone get their hands on a tech preview?

We’re working diligently to get VMware VMs on Apple silicon and making great progress as you can see, but we would be remiss if our product did something unexpected or unsafe to any computer it was installed on. Even for a Tech Preview there’s a good deal of QA/QE work still to be done as we continue to add code to bring features online.

We’re the leaders of virtualization in the enterprise with SDDC stacks like VCF because we consistently deliver a high measure of quality, security and performance across all our products. It’s not because we shipped first, it’s because we ship when it’s ready.

That said, the team is planning to deliver a Public Tech Preview of VMware Fusion for macOS on Apple silicon before the end of this year, and we can’t wait to get it in the hands of every Apple silicon Mac owner.

Wrapping Up

If you’ve made it this far, the team and I appreciate it! The world is a bit crazy right now, but there are great things coming. Thanks for your understanding and patience as we take the time to get it right amidst all of the challenges of today.

We really couldn’t be more excited, not only about the development progress, but about life with Fusion on Apple silicon. The hardware is incredible, and if our early testing is any indication, users are going to be very happy running VMware virtual machines on the Macs of today and the future.

Keep an eye out in the VMTN community or follow us on Twitter for our Tech Preview announcement in the coming months!

And please consider donating to the charities above to support our Indian friends and colleagues affected by the global pandemic. <3

May 3 – this blog was updated to reflect that the timeframe to deliver this offering is not currently impacted by the COVID situation. That said, for folks looking to help with the relief efforts in India, please check out the 2021 India COVID-19 Recovery Fund.

Black Friday Sale is On!

Black Friday banner graphic

It’s that time of year again!

The biggest sale of the year this time around offers up to 30% discount on your favorite Desktop Hypervisor products: VMware Fusion Pro and VMware Fusion Player!

Upgrades have never been cheaper, with 20% discount on top of the already reduced 2020 pricing!

Upgrade from Fusion 10 or Fusion 11 ‘standard’ to Fusion 12 Player (commercial license) for only $63! (USD)
Or upgrade your copy of Fusion 10 Pro or Fusion 11 Pro to the ultimate: Fusion 12 Pro for only $80! (USD)

Black Friday Sale discounts expire on 11/29/2020 @20:59 (Pacific Time). Valid on select products. Discounts as marked for each country. Screenshot is from the US Store

Why Fusion 12?
So many features!

Sale ends on Sunday Nov 29, so hurry before it’s too late!

Shop Now!

Fusion 12.1 Now Shipping

Hot on the heels of macOS 11 Big Sur’s public release, today we’ve shipped an update to Fusion which addresses some compatibility issues, brings performance improvements, and even introduces some new features.

For starters, we’ve enabled nested virtualization (VT-x) support for Macs that don’t have the ‘VMCS Shadowing’ hardware feature. Users can once again deploy ESXi, or Windows with VBS or Hyper-V (WSL) enabled, as well as other nested hypervisors like VMware Workstation for Linux.

Fusion 12.1 Brings Nested Back

ESXi performs best because it’s designed to run great as a VM

We’ve also added support for Windows 10 20H2, Ubuntu 20.10, RHEL 8.3 and Fedora 33.

There’s a new feature: Fusion health check
– When Fusion is running on macOS 10.15, the ‘Pipe Broken’ / ‘cannot connect to /dev/vmmon’ issue can be easily fixed with a click.

We’ve also included the docker-machine-driver-vmware directly, so users no longer need to install it separately to get things like Minikube up and running. Just install Minikube as you normally would (‘brew install minikube’) and start it with ‘minikube start --vm-driver=vmware’ (the driver can also be set as the default with ‘minikube config set driver vmware’).
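Putting those pieces together, spinning up a local Kubernetes environment with the bundled driver looks roughly like this (assuming Homebrew is already installed):

```shell
brew install minikube               # install minikube itself
minikube start --vm-driver=vmware   # use the bundled VMware driver for this run
minikube config set driver vmware   # or make vmware the default driver going forward
```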

And last but not least, we’ve updated ‘vctl kind’ to support KIND version 0.9.0.

Thanks for all the feedback leading up to this release!
We hope you enjoy!

Direct Download Link

Release Notes

Fusion 12: Now Available!

fusion 12 big hero pic

Team Fusion is proud to announce the general availability of VMware Fusion 12 Pro and VMware Fusion 12 Player!

Quick Links:


macOS Big Sur on macOS Big Sur

To recap some of the new features we’ve brought to this release:

  • macOS Big Sur Host and Guest
  • Containers and Kubernetes with ‘vctl’ and kind
  • DirectX 11 and OpenGL 4.1
  • eGPU Support
  • Monster VMs – 128GB RAM, 32 vCPU, 8TB Disks, 10 NICs, 8GB Graphics Memory
  • Latest OS Support
  • USB 3.1 Device support
  • Sandboxed Graphics Rendering Engine
  • Improved Accessibility features (Section 508)

And of course, Fusion Player, available Free for Personal Use!

vctl kind image

Kubernetes with vctl and Kind

vctl documentation

Release Notes

Announcing: VMware Fusion 12 and Workstation 16


a list of the 4 VMware products: Fusion and Workstation Pro and Player

It is our pleasure and privilege to announce the upcoming VMware Desktop Hypervisor product lines for 2020: VMware Fusion 12, and VMware Workstation 16.

(Want just the Workstation angle? Check out the Workstation 16 Blog announcement!)

Quick Overview

Building on over 20 years of local virtualization excellence, the latest releases of our favorite virtualization tools deliver some amazing new capabilities for IT Admins, for Developers, and for everyone else.

Earlier this year we introduced ‘vctl’ to push, pull, build and run OCI containers using Fusion’s award winning hypervisor stack, but now developers with Fusion or Workstation can use ‘vctl’ to  deploy Kubernetes clusters with newly added support for ‘kind’ – a tool for creating developer-defined local clusters using containers as “nodes.”

The aim is to provide developers a reliable setup for establishing a rapid ‘inner loop’ pipeline of development to build modern applications, or even when working on the codebase of Kubernetes itself.


(Fusion 12 and Workstation 16 will be available in VMware’s Fiscal Q3 which ends October 30th. Don’t want to wait? We’re still collecting feedback from our Fusion and Workstation Tech Previews!)

There are a lot of features and changes to talk about, so let’s dive right in!


Say hello to Fusion Player

fusion 12 player graphic

I’m excited to announce: VMware Fusion 12 Player.

And, in alignment with Workstation Player, Fusion will be available with a Free for Personal Use license!

Fusion 12 Player replaces Fusion 11.5 ‘standard’, and follows the same pricing and licensing model as Workstation Player, meaning that it is free for Personal Use but requires a license for Commercial Use. Fusion Player has the same features as Fusion 11.5.x ‘standard’ and more.

So if you’re a home user who switched to Mac but want to use Windows for things like DX11 games or other personal apps, you can do so, for free, with a Personal Use License.

Fusion 12 Player also comes with our developer-centric container runtime and CLI, vctl, including the new capability to deploy Kubernetes clusters with KIND.

And by popular demand from our customers with larger footprints, new commercial licenses for Fusion Player are now available for purchase from our Channel Partners with either 1 or 3 years of Support and Subscription, even if the order quantity is only 1. Customers at store.vmware.com can now optionally add SnS in increments of 1 unit as well.

SnS provides major-version upgrades and active support engagement for the duration of the term purchased, at a fraction of the cost of a new license or per-incident support agreement. In alignment with other VMware perpetually licensed products, SnS is required for orders placed through our channel partners, but remains optional for customers purchasing from store.vmware.com.


New Pricing for Fusion Pro on up to 3 devices: Even PCs

The Workstation and Fusion products have undergone some big changes this year.  Fusion 12 Pro now supports individual use on up to 3 devices which now include Windows or Linux PCs running Workstation Pro.  Yes, your Fusion 12 Pro key will unlock Workstation 16 Pro on Windows or Linux. Upgrades for existing customers are reduced to $99, and both upgrades and new licenses ($199) now give users the ability to use their license on up to 3 personal devices running either Fusion Pro for Mac, Workstation Pro for Windows or Workstation Pro for Linux.


Fusion Workstation Pricing Chart

Updated Pricing for 2020

Technology Guarantee Program

We’ve also made changes to our Technology Guarantee Program. Users who bought Fusion 11.5 or Fusion 11.5 Pro after June 15th 2020 (roughly when Big Sur was announced) will be automatically given new license keys for Fusion 12 Player or Fusion 12 Pro, respectively, in their MyVMware portal. Eligible users will be emailed once the automatic license upgrade happens to their account.

Because Fusion Player is now free for personal use, Fusion 11.5 customers entitled to an upgrade under the TGP will be provided a Commercial Use license.


fusion 12 big hero pic

New Features

Okay, it’s time to look at some new features, and let’s begin with what’s in Fusion 12 Pro and Fusion 12 Player!

macOS Big Sur Support

We’ve made some big changes to get ready for the next major version of macOS, 11.0 Big Sur, for both Hosts and Guests. With big changes happening at the deepest layers of the Mac operating system, we’ve rearchitected our stack to take full advantage of Apple’s hypervisor APIs so that we no longer need kernel extensions to run Fusion on the Mac, making it more secure and ready for the future of macOS.

Fusion 12 will fully support macOS Catalina at launch, and is ready to support macOS Big Sur once it’s made generally available. On Catalina, it runs the same way it always has: with our kernel extensions. On Big Sur, it will run VMs, Containers and Kubernetes clusters by using Apple’s APIs.

Containers and Kubernetes

For developers, we’ve added new features to our container engine CLI, vctl, while also making it available on Workstation for Windows.

‘vctl’ now supports ‘vctl login’, letting you persistently log into remote container registries without having to type the full URL path every time you want to pull an image.

vctl also brings a new feature to deploy Kubernetes clusters with support for ‘kind’. vctl can expose a ‘docker-compatible’ socket for kind to connect to, without modification to kind itself.
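A rough sketch of that workflow from the terminal follows; exact subcommand behavior may differ slightly between releases, so treat this as illustrative:

```shell
vctl system start    # start the vctl container runtime
vctl kind            # expose a docker-compatible socket and point kind at it
kind create cluster  # create a local Kubernetes cluster using containers as nodes
kubectl cluster-info --context kind-kind   # verify the cluster is reachable
```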

DirectX 11 and OpenGL 4.1

Fusion and Workstation now both support running games and apps with Direct3D version 11, otherwise known as ‘DirectX 11’, or OpenGL 4.1. Users can now allocate up to 8GB of vRAM to a 3D-accelerated guest to maximize gaming and 3D app performance. (VMs must be configured with 16GB of RAM or more to unlock the 8GB vRAM option.)

eGPU Support

Fusion 12 Player and Fusion 12 Pro also now support eGPU devices. With eGPU, Fusion offloads the resource-taxing graphics rendering process from the internal integrated or discrete GPU, to a much more powerful one running in a supported external enclosure.

Install from Recovery Partition using APFS

We’ve also added APFS support for installing macOS from the Recovery Partition, making it easier than ever to install macOS guests.

vSphere 7 Compatibility

Fusion and Workstation have been updated to support connections to vSphere 7 through ESXi and vCenter for remote VM operation and configuration, as well as providing workload mobility / compatibility between Desktops and Data Centers.

Sandboxed Graphics Rendering Engine

Fusion and Workstation both offer a new security enhancement feature: Sandbox Renderer. The SBR runs the virtual graphics engine in a separate thread with reduced privilege, making Fusion and Workstation more secure out-of-the-box without sacrificing performance or quality.

Improved Accessibility

We believe in making computing as inclusive as possible for everyone. To that end, we’ve improved our compliance with VPAT Section 508 to help users of all kinds get the full benefits of using virtual machines.

USB 3.1 Support + Performance & Bugfixes

In this release we’ve also added support for USB 3.1 virtual devices, allowing for USB 3.1 hardware devices to be passed into virtual machines with full driver support.


Can’t wait to ship!

Delivering Fusion 12 and Workstation 16 is the result of a collaboration between many different teams across all of VMware, so we want to give thanks not only to our rockstar engineers, but to all the customers and Tech Preview users whose great feedback contributed to this incredible release.

We can’t wait to ship!


If you want to have an early look at some of these features, we’re still collecting feedback from our Fusion and Workstation Tech Previews.

Ready for Testing: Updated Tech Preview with Big Sur Support

At WWDC 2020, the good folks at Apple wowed us with a look at the next major version of macOS: 11.0 Big Sur, and it’s no stretch for us to say: we’re pleasantly surprised.

We’ve been working to update Fusion with support for the new rendition of macOS, and today we’re pleased to share with you some early progress with the introduction of a new Tech Preview.

Direct Link to .dmg (no login required)

Download Group (you must be logged in to MyVMware to access this page)

Big Changes

Big Sur brings with it some really big visual changes, but also major changes under the hood. For instance, Apple has been progressively deprecating 3rd party Kernel Extensions or “kexts” which Fusion needs to run VMs and containers. In order to continue to operate in this model, we’ve re-architected our hypervisor stack to leverage Apple’s native hypervisor APIs, allowing us to run VMs without any kernel extensions. 

On macOS Catalina systems, Fusion operates as it always has using kernel extensions to provide functionality. However on Big Sur systems, Fusion operates entirely without kexts.

This Tech Preview is the first release of us operating in this new mode and we’re eager to hear your feedback.

This Tech Preview supports macOS Big Sur 11.0 Beta 2 for both Host and Guest. That means you can run Big Sur VMs on macOS Catalina hosts, as well as on Big Sur hosts.

What else is in this preview?

Building on the last previews, this TP includes DX11 and OpenGL 4.1 support, as well as eGPU support for improved graphics performance. For example, you can render DX11 graphics for Windows VMs on the built-in display for a MacBook Air using an eGPU housing a Radeon 5700. The performance gains vs. a discrete mobile GPU are pretty significant! One might even say ‘YUUUGE’. (In order to use your eGPU, you must select ‘Prefer eGPU’ from the Virtual Machine > Settings > Display window. This is a per-VM feature.)

We’re also deprecating macOS 10.14 Mojave hosts, starting with this tech preview. Fusion 11.5.x will be the last version of Fusion which supports 10.14, whereas this year’s major release will support 10.15 and 11.0.

How to provide feedback?

We would love to hear from you in our Fusion Beta community: https://communities.vmware.com/community/vmtn/beta/fusion-pro

Known Issues

With such big changes under the hood, there are of course some known issues that we’re working on, both with our code as well as filing issues with Apple directly.

  • Nested VMs are not currently supported.
  • The Jumbo Frames feature currently does not function.
  • When the installation of a macOS Big Sur guest completes, the virtual disk containing the temporary installer image is not automatically deleted.
    • Workaround: Manually delete the disk once installation is complete
  • Big Sur guests may log out unexpectedly and/or display a black screen when clicking an invisible icon in the upper right corner of the display.
    • Workaround: There is no workaround at this time; we are continuing to investigate.
  • A powered-on VM snapshotted or suspended with Fusion running on a macOS 10.15.x or earlier host might fail to resume on a macOS Big Sur host.
    • Workaround: Power off your VMs before upgrading your host to Big Sur to avoid VM corruption. As always, employ backups when testing beta software!
  • A maximum of 31 vCPUs are available when running on the current seed of macOS 11 Big Sur. Configuring 32 vCPUs will prompt an error message to reduce the number of cores to 31 or less. (this is temporary and not related to licensing)
    • Workaround: Use 31 or fewer vCPU cores
  • REST API is now only available to local connections.
  • VMs that have side channel mitigations enabled while running on Fusion on macOS 11 Big Sur may have reduced performance. This setting is enabled by default.
    • Side channel attacks allow unauthorized read access by malicious processes or virtual machines to the contents of protected kernel or host memory. CPU vendors have introduced a number of features to protect data against this class of attacks such as indirect branch prediction barriers, single thread indirect branch predictor mode, indirect branch restricted speculation mode and L1 data cache flushing. While these features are effective at preventing side channel attacks they can cause noticeable performance degradation in some cases.
    • Workaround: If your security situation allows, you may regain some performance by disabling side channel mitigations in your VM Settings > Advanced window.

VMware Fusion 11.5: Now With Container Support

Fusion 11.5.5 Available Now

tl;dr: Fusion Supports Containers! Download the bits below!

Today is a big day for us on the Desktop Hypervisor team. Our beloved products Fusion and Workstation are getting some pretty significant updates for no extra cost to existing users.

We have a lot to share about our commitments to Developers, to Community, and to Windows, so let’s dive right in!

Our Commitment to Developers

VMware has long served developers, as well as end users and IT professionals, with some of the best in class features with our award winning desktop hypervisor products, VMware Fusion and Workstation.

However, when it comes to developing and testing today’s modern applications, things look a little different than the traditional ones which Fusion was originally designed to support.

Today, we’re proud to express our commitment to today’s modern developers by delivering new support for OCI containers using our award-winning hypervisor technology stack. Fusion 11.5 users can now pull, build, run and push containers as part of a modern development and testing workflow, without needing other tools such as Docker Desktop installed.

Enter vctl

To support these new workflows, we created a new CLI tool: vctl, and we’re shipping it today as a part of our Fusion 11.5.5 update.

overview of vctl commands

vctl: “vee-kettle” or “vee-control”?  The debate continues…

vctl is designed to locally manage containers and our containerd based runtime. We use vctl to pull and run images from remote container repositories like Harbor or Docker Hub, or to build custom container images using standard Dockerfiles.

Some folks may recall that we first introduced vctl as part of Project Nautilus during our Tech Preview a few months back. Since then, we’ve listened to the community, made some changes, added some new capabilities, and are ready to bring it to the world as part of a free update to your existing copy of Fusion 11.5 or Fusion 11.5 Pro. (yes, both!)

For starters, there’s a new syntax. If you’re familiar with docker, you’ll feel right at home, so things like ‘vctl run nginx’ work the same as ‘docker run nginx’.

We think users will also be happy to hear that we’ve also added build support, so you can build images from standard Dockerfiles.

Let’s take a quick look at how we can get started with vctl!

A familiar workflow

With the vctl CLI experience, we wanted to focus on some of the most common tasks users perform with containers and bring them to our unique container engine, offering folks something radically new.

> vctl pull nginx
INFO Pulling from index.docker.io/library/nginx:latest
───                                                                                ──────   ────────
REF                                                                                STATUS   PROGRESS
───                                                                                ──────   ────────
index-sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097      Done     100% (1412/1412)
manifest-sha256:8269a7352a7dad1f8b3dc83284f195bac72027dd50279422d363d49311ab7d9b   Done     100% (948/948)
layer-sha256:11fa52a0fdc084d7fc3bbcb774389fd37b148ee98e7829cea4af189735acf848      Done     100% (203/203)
layer-sha256:afb6ec6fdc1c3ba04f7a56db32c5ff5ff38962dc4cd0ffdef5beaa0ce2eb77e2      Done     100% (27098756/27098756)
config-sha256:9beeba249f3ee158d3e495a6ac25c5667ae2de8a43ac2a8bfd2bf687a58c06c9     Done     100% (6670/6670)
layer-sha256:b90c53a0b69244e37b3f8672579fc3dec13293eeb574fa0fdddf02da1e192fd6      Done     100% (23922586/23922586)
INFO Unpacking nginx:latest...
INFO done

Pulling images is familiar and defaults to Docker Hub for simplicity, but you can specify a full path to another repo or registry.
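For example, pulling the same image from a private Harbor instance might look like the sketch below (harbor.example.com is a hypothetical registry host, and the guard just keeps the script harmless on machines without Fusion installed):

```shell
# harbor.example.com is a hypothetical private registry host.
# Guarded so the sketch is safe to run where vctl is not installed.
if command -v vctl >/dev/null 2>&1; then
  vctl pull harbor.example.com/library/nginx:latest
else
  echo "vctl not available on this machine"
fi
```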

To run an image, it should once again feel familiar:

> vctl run --name=myNginx -t -d nginx
INFO container myNginx started and detached from current session

Same goes for showing the container inventory:

> vctl ps -a
────      ─────          ───────                ──    ─────   ──────    ─────────────
NAME      IMAGE          COMMAND                IP    PORTS   STATUS    CREATION TIME
────      ─────          ───────                ──    ─────   ──────    ─────────────
myNginx   nginx:latest   nginx -g daemon off;   n/a           running   2020-05-28T12:21:46-07:00

> vctl images
────            ─────────────               ────
NAME            CREATION TIME               SIZE
────            ─────────────               ────
nginx:latest    2020-05-28T12:21:13-07:00   48.7 MiB
photon:latest   2020-05-27T19:40:03-07:00   14.5 MiB


It’s fairly light on resources, and you also have control over how much is assigned to the appliance when firing up the container, using the -c and -m (CPU and memory) flags. Run vctl run with no arguments to see some examples.
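A sketch of what that could look like (the 2-CPU / 2048 value pair is illustrative; run vctl run with no arguments, as noted above, to confirm the exact units your build expects):

```shell
# Illustrative: ask for 2 CPUs and more memory for the container appliance.
# Guarded so the sketch is harmless where vctl is not installed.
if command -v vctl >/dev/null 2>&1; then
  vctl run --name=bigNginx -c 2 -m 2048 -t -d nginx
else
  echo "vctl not available on this machine"
fi
```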


Activity Monitor showing the vmware-vmx process

Fusion is fairly light on resource consumption for the container appliance

When a container is fired up, we also mount its rootfs up to the Host, meaning you can use Finder to browse the container contents!  You could open up the running code of your app and make changes in real time, in a way that feels just like editing any other file on your Mac.
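As a sketch, browsing a running container’s files from Terminal (or Finder) might look like the following; the storage path mirrors the ‘Container rootfs in host’ line that vctl describe prints, with your own home directory in place of a hypothetical username:

```shell
# Hypothetical rootfs path; copy the real one from `vctl describe <name>`.
ROOTFS="$HOME/.vctl/storage/containerd/state/io.containerd.runtime.v2.task/vctl/myNginx/rootfs"
if [ -d "$ROOTFS" ]; then
  ls "$ROOTFS/usr/share/nginx/html"   # the container's web content, visible in Finder too
else
  echo "rootfs not mounted (is the container running?)"
fi
```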

container storage volumes

When you start vctl, you’ll see the ‘Fusion Container Storage’ mount. Each container also gets mounted as it starts, and unmounted when it stops.

Look! Folders and Files from a Linux filesystem!

Folders and Files from the Linux filesystem in the vanilla nginx container image. You could edit this file directly in Visual Studio Code on the Mac for example.

Let’s check out some of the details:

> vctl describe myNginx
Name:                       myNginx
Status:                     running
Command:                    nginx -g daemon off;
Container rootfs in host:   /Users/mike/.vctl/storage/containerd/state/io.containerd.runtime.v2.task/vctl/myNginx/rootfs
IP address:
Creation time:              2020-05-28T12:21:46-07:00
Image name:                 nginx:latest
Image size:                 48.7 MiB
Host virtual machine:       /Users/mike/.vctl/.r/vms/myNginx/myNginx.vmx
Container rootfs in VM:     /.containers/myNginx
Access in host VM:          vctl execvm --sh -c myNginx
Exec in host VM:            vctl execvm -c myNginx /bin/ls

Using the describe command on a running container, we can see more detail, including the execvm commands we can copy and paste to ‘shell’ into our appliance OS or run some process (/bin/ls in the example above). This of course is in addition to being able to vctl exec into the container process itself.
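For instance, a hedged sketch of both styles; the execvm line is taken from the describe output above, while the exec arguments are illustrative, so check vctl exec help output on your build:

```shell
# Guarded sketch: run a command inside the container process itself,
# then one inside the hosting appliance VM (per the describe output above).
if command -v vctl >/dev/null 2>&1; then
  vctl exec myNginx /bin/ls /usr/share/nginx/html
  vctl execvm -c myNginx /bin/ls
else
  echo "vctl not available on this machine"
fi
```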

Getting Started

Getting started is as easy as updating to Fusion 11.5.5, opening up your favorite Terminal app, and running vctl system start.

Once the daemon is running, you can try pulling or building and running container images!
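Putting those pieces together, a minimal first-run session might look like the sketch below (the container name is illustrative, and the guard keeps the script harmless on machines without Fusion installed):

```shell
# Minimal vctl quickstart sketch: start the runtime, pull an image,
# run it detached, then list containers. Names here are illustrative.
if command -v vctl >/dev/null 2>&1; then
  vctl system start
  vctl pull nginx
  vctl run --name=myNginx -t -d nginx
  vctl ps -a
else
  echo "vctl not found: update to Fusion 11.5.5 first"
fi
```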

We have a good deal of documentation and examples located on our GitHub page, as well as our normal documentation centers.

If you hit a bug with your container or Dockerfile, let us know by filing an issue so we can support it!

Which leads us to our next commitment: Community.

Our Commitment to Community

Releasing great software is fun, but what’s also exciting is the community around it.

VMware has a long-standing and well-established community, but it tends to center around the “VI Admin” via VMware Technology Network, our community forum, and we welcome users to share their experiences with us there.

However, when it comes to more development-related discussions, the VMTN community forum might not always have the right experts readily available.

So we’ve expanded in a few new ways:

In collaboration with the VMware {code} team, we’ve created a new Slack channel: #fusion-workstation, within the VMware {code} Slack community!

Joining VMware {code} is free, and once you’ve joined you’ll get an email with a link to the main Slack channel.

The VMware {code} community is full of folks that go beyond the typical duties of the VI Admin, and our Fusion and Workstation product and engineering teams will be directly participating to help answer tough questions. (We’ll probably share some memes too.)

We will also be doing live and recorded events, both with the VMware {code} team and community and on our own, to help folks get the most out of Fusion and Workstation. Follow our Fusion and Workstation Twitter accounts to get the latest details!

As mentioned earlier in this post, we also have our GitHub repository with example docs and how-to content, as well as detailed descriptions of the vctl subcommands. We encourage users to let us know if they hit any snags with vctl by filing an issue so we can work to make sure every OCI container runs without a hitch!

And finally,

Our Commitment to Windows Users

The days of “you must disable Hyper-V to use Workstation” are over!

After years of collaborative development and engineering between VMware and Microsoft, we’re proud to be delivering a compatibility story where Workstation 15.5.5 and newer can run on Windows 10 Hosts with Hyper-V mode enabled.

Hyper-V mode is required for security features like Device Guard and Credential Guard, as well as developer features like WSL, and previously rendered Workstation completely inoperable.

For more detail, check out our Workstation Blog.


Wrapping it all up

So that was a lot… Our commitment to Developers, with a new container runtime for Mac and WSL support on Windows; to Community, with many new ways to engage and a content calendar to help folks get the most from Fusion and Workstation; and to Windows users, with our multi-year collaborative effort to support Windows 10 Hyper-V mode with Workstation.

DirectX 11 Now in Testing with VMware Fusion Tech Preview 20H2

The VMware Fusion and Workstation team is excited to announce the release of our 20H2 Technology Preview featuring the first drop of our DirectX 11 support!


Quick links to the bits:

Fusion Pro for Mac


What’s New with the Fusion 20H2 Tech Preview

105FPS on a DX11 Benchmark is kind of nice!

Benchmark run with a Radeon 5500M with 4GB of video RAM assigned to the VM; the window was 2560×1440 on a 4K external display

DirectX 11 Support

  • Provides support for DirectX 11 (Direct3D v11) and OpenGL 4.1 graphics capabilities in the guest operating systems! DX11 is for Windows guests only, but OpenGL 4.1 applies to Linux guests as well.
  • Hundreds of new games and applications can now run in Fusion and Workstation!

Increased Hardware Maximums: MONSTER VMS

  • Both Fusion and Workstation Tech Preview 20H2 support up to 32 processors and up to 128GB of RAM per virtual machine, as well as 4GB of shared graphics memory

Sandboxed Graphics Processes

  • We’ve dramatically enhanced virtual machine security by using a special non-root “sandbox” process for rendering 3D hardware-assisted graphics. This further isolates the Guest VM operations from the Host, significantly reducing the viability of privilege escalation to the host.

Improved External GPU support

To get started with DX11, VMware Tools needs to be upgraded and the virtual hardware compatibility version must be set to v18. Existing VMs can be upgraded by adjusting the virtual hardware compatibility while the VM is powered off. After powering on, upgrade VMware Tools as you normally would. With new VMs you may need to manually set the virtual hardware version to v18 before installing, so double-check.
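Under the hood, the hardware compatibility level is a single entry in the VM’s .vmx configuration file; after the upgrade it should read roughly as below (the key name is the standard one VMware products use, but always change it via the UI, or edit the file only while the VM is powered off):

```
virtualHW.version = "18"
```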

We’re committed to our users and have been working hard on this feature for many years, so we welcome your feedback!

Let us know your experience! Does your favourite game work? Glitchy? Looks perfect? Help us improve by sharing in our Fusion Tech Preview Community Forums or our Workstation Tech Preview Community Forum.


Fighting the COVID-19 Coronavirus with VMware Fusion and Folding At Home


>> Quick Link to the OVA Appliance

What a time to be alive.

I’m writing this from my apartment in San Francisco where I’ve been sheltering in place for almost 2 weeks now.

Personally I had been wondering just how I could help, beyond just applying the rules of today… social distancing, not panic-buying, keeping in touch with friends and family with Zoom and FaceTime, trying to limit time spent on Facebook (okay, that last one I’m having a hard time with, but still…).

All that stuff is good, but surely there has to be more to do without putting anyone at risk, right?

Well thankfully I’m not the only one thinking that.

My friends and colleagues William Lam and Amanda Blevins, with the support of the VMware community, have taken it upon themselves to put together a free virtual appliance that can contribute your spare CPU cycles to the Folding At Home project.

More details: Link: A Force For Good: VMware Appliance for Folding at Home

Basically, the appliance creates a virtual machine that’s all set up with what it needs to start crunching numbers to aid research into the COVID-19 Coronavirus.

The first release did not support deployment on Fusion and Workstation due to some inconsistencies in the OVA profile, but we’ve worked to address that in today’s 1.0.1 release.

So let’s look at how to download and get to crunching numbers with it on Fusion.


Downloading the appliance is easy. Just go to the link below and click ‘download’.

Link: Folding At Home OVA

The download is about 250MB and the VM it creates ends up being about 750MB, so it’s a pretty small appliance.

Once the download starts, click the drop-down and change the download item to also grab the FAQ and Deployment Steps PDF files.

Once downloaded, you’ll need to ‘import’ the OVA.

You can do this from the File menu, or by just double-clicking the downloaded .ova package.

The import process creates a copy of the appliance as a virtual machine.


The installation process goes like this:

  1. Download the OVA
  2. File > Import…
  3. Select the .ova file you just downloaded
  4. Click ‘Continue’ to bring up the configuration window
  5. Configure the appliance as follows:
    • Networking
      • (Optional) Set the Hostname
      • Leave IP and other settings as they are
    • Proxy Settings (Optional)
      • Only configure this if your host requires a Proxy
    • OS Credentials
      • Provide a root password (VMware1! is the default)
    • Folding At Home (F@H) Settings
      • You can leave these as they are, or configure as needed (it won’t prevent installation, and you can easily re-deploy if you want to change something)
      • Note: Fusion and Workstation unfortunately do not support the ‘GPU’ mode, so you’ll have to leave that unchecked
      • The OVA Properties are already configured to add your compute cycles to Link: TeamVMware (ID is 52737, you can check out our stats here: stats.foldingathome.org/team/52737 )
      • The default Folding profile is set to ‘medium’ which won’t try to take every last drop of CPU, making it a good option if you’re using the system while folding. Otherwise, if it’s a spare rig, bump that to “Full” to be more aggressive.
      • The F@H Remote Management console has a default password set of VMware1!, but you may change it if you wish before deploying.
  6. Click ‘Continue’
  7. Provide a file path to save the VM to and click ‘Save’
    • At this point you may want to configure some of the CPU and RAM settings, but if you click ‘Cancel’ at this stage it will trash the newly created VM.
  8. Click ‘Finish’

At this point, the VM automatically starts up.

What I do here is quickly ‘Power Off’ the VM so that I can assign more CPU cores and RAM.

  • Go to the Virtual Machine menu and select ‘Shut Down’, (or hold ‘Option’ and click ‘Power Off’ to really pull the plug…)
  • Open the VM settings and add more CPU cores and RAM. Default is 2 cores and 1GB of RAM.
    • How many cores you want to assign depends on what you’re using the system for. If it’s your daily driver, you probably don’t want more than half your available CPU cores. If it’s a separate machine that isn’t actively being used, I generally leave 2 cores for the OS and assign the rest to the VM.
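If you’re not sure how many cores you have to work with, a quick Terminal check helps; sysctl -n hw.ncpu is the macOS way, and the nproc fallback below is only there to keep the sketch portable to Linux hosts running Workstation:

```shell
# Print the number of logical CPUs, then a suggested VM allocation
# (half the cores, per the "daily driver" rule of thumb above).
if sysctl -n hw.ncpu >/dev/null 2>&1; then
  CORES=$(sysctl -n hw.ncpu)          # macOS
else
  CORES=$(nproc 2>/dev/null || echo 4)  # Linux fallback; assume 4 if unknown
fi
echo "host cores: $CORES, suggested VM cores: $((CORES / 2))"
```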


After getting your settings right, it’s time to power on the VM for real.

It should run a few maintenance tasks, and then present you with a prompt.


Once it’s powered on, you can SSH into it to control it.

There are more details and context available in the FAQ guide posted on the appliance download page


Personally I had a few issues deploying it…


  • Bridged networking didn’t work for me
    • I had to use NAT, but that didn’t change any of the functionality.
      • The VM hung on ‘scanning for network’ while booting up. As soon as I switched from Bridged (Autodetect) to NAT, everything started working.
      • If I were managing it remotely, I would need to do some port forwarding in Fusion’s Network Editor for the vmnet it’s on. (details on what ports are needed are in the FAQ and deployment guides)
    • Sometimes it wouldn’t accept my new root password…
      • VMware1! is the default, and that worked anyhow

That’s basically all there is to it. It will sit and wait for Work Units to calculate.

You can check whether it’s running any jobs or do some troubleshooting (from either the console window or by SSH’ing into the VM; sshd is started by default).

  • Check the status of the Service:
/etc/init.d/FAHClient status  
  • You can restart the service with:
/etc/init.d/FAHClient stop
/etc/init.d/FAHClient start
Or simply:
/etc/init.d/FAHClient restart

You can then view the logs as below:

/etc/init.d/FAHClient log -v  


less /var/lib/fahclient/log.txt  

Welcome to the front lines of the war against COVID-19!


Originally posted at: https://mikeroysoft.com/blog/covid-fah/

VMware Fusion Tech Preview 20H1: Introducing Project Nautilus

It’s Tech Preview time, and this year we’re doing things a bit differently. Let’s dive in!

New Decade, New Approach to “Beta”

Here on the Fusion team, we want to get features in the hands of customers faster than ever before, and we want to iterate and refine things with the guidance of our users, and to do so transparently, out in the open, as much as possible.

In that vein, for the Fusion Pro Tech Preview 2020 we’re doing things a bit differently than we have in previous years.

This year, in an ongoing way, we’ll be releasing multiple updates to our Tech Preview branches, similar to how we update things in the main generally available branch.  The first release is available now, and we’re calling it ‘20H1’.

What this means is that if you have Tech Preview 20H1 (TP20H1 as we lovingly call it…)  installed, it will get updates throughout the year as we improve the quality of our release.

We’re also moving our documentation and other things over to GitHub. We’ll continue to add more to the org and repos there, maintain and curate them, and host code and code examples that we are able to open source.

Having our docs on GitHub lets users provide feedback and file issues against both the docs and the products themselves. We will continue to post updates and encourage discussion in the community forum, while GitHub becomes more of a ‘latest source of truth’, and a place where folks can file (and even track) more ‘official’ bugs.

We encourage folks to file issues on GitHub, as well as fork and make changes to the repos there if you believe there’s a better way or if we’re missing something.

Same as always, the Tech Preview builds are free for use and do not require a purchased license, but they come with no guarantees of support and things might behave unexpectedly. But hey, that’s where the fun is, right?

Okay, let’s talk about features…

Firstly, we did some cool USB work!  We’ve opted into using Apple’s native USB stack, enabling us to remove one of our root-level kernel extensions. Try out your devices and let us know if they have any trouble by filing an issue in this GitHub repo: Fusion GitHub usb-support

In Fusion Tech Preview 20H1, however, our main focus is the initial release of an internal project we’ve been calling ‘Project Nautilus’. We’ve been working on this for almost 2 years, so I’m extremely pleased to say that it’s finally available to the public to use, for free, as part of TP20H1.


What is Project Nautilus?

Project Nautilus enables Fusion to run OCI compliant containers on the Mac in a different way than folks might be used to. Our initial release can run containers, but as we grow we’re working towards being able to declare full kubernetes clusters on the desktop.

By leveraging innovations we’re making in Project Pacific, and a bevy of incredible open source projects such as runC, containerD, Cri-O, Kubernetes and more, we’re aiming to make containers first-class citizens, in both Fusion and Workstation, right beside virtual machines.

Currently a command-line oriented user-experience, we’ve introduced a new tool for controlling containers and the necessary system services in VMware Fusion and Workstation: vctl.

Containers on the desktop today

Today, when you have, say, Docker for Mac installed, its services start, it creates a special Linux virtual machine (in one of many ways, including using Fusion), and essentially maps all of the ‘docker’ commands back to the kernel running in the Linux VM. (Remember that docker is just a front-end to containerd, formerly dockerd, which fronts runC, which interfaces with the Linux kernel ‘cgroups’ feature for isolating processes, i.e. the ‘container’ part of the container.)

So that bulky VM sits there running, waiting for your docker commands, and runs all your containers within it.

Each running container becomes a part of the docker private network, and you forward some ports to localhost and expose your service.

In Fusion with Project Nautilus, we’ve taken a different approach.

Nautilus is different

The vision for Nautilus: A single development platform on the desktop that can bring together VMs, Containers and Kubernetes clusters, for building and testing modern applications.

With Nautilus, leveraging what we built for vSphere and Project Pacific, we’ve created a very special, ultra-lightweight virtual machine-like process for isolating the container host kernel from the Host system. We call that process a PodVM or a ‘Native Pod’.

Each container gets its own Pod, and each Pod gets its own IP address from a custom vmnet, which can easily be seen when inspecting the container’s details after it launches.

Meaning, we can easily consume running services without having to deal with port forwarding back to localhost.

It also means that while today we deploy the container image in a pod on a custom vmnet, we can conceivably change that to a bridged network… Meaning you could start a container, the pod would get an IP from the LAN, and you could then immediately share that IP with anyone else on the LAN to consume that service, without port forwarding.

Of course with custom vmnets we can configure port forwarding, and we’ll also be exposing more functionality there as we grow the Nautilus toolkit.

One of our goals is to bring to bear a new model for designing much more complex deployments. We see a future where we can define, within a single file, a multi-container + VM + Kubernetes cluster deployment, allowing users to accelerate their application modernization.

Nautilus Today

Today, Nautilus is controlled by ‘vctl’, a binary added to your $PATH when Fusion TP20H1 is installed.

Let’s look at the vctl default output:

mike@OctoBook >_ vctl

vctl - A CLI tool for Project Nautilus Container Engine powered by VMware Fusion

Feature Highlights:
  • Native container runtime on macOS.
  • Pull and push container images between remote registries & local macOS storage.
  • Run containers within purpose-built linux-based virtual machines (CRX VM).
  • 1-step shell access into virtual machine debug environment. See 'vctl sh'.
  • Guide for quick access to & execution in container-hosting virtual machine available in 'vctl describe'.

USAGE:
  vctl COMMAND [options]

COMMANDS:
  delete     Delete images or containers.
  describe   Show details of containers.
  exec       Execute a command within containers or virtual machines.
  get        List images or containers.
  help       Help about any command
  pull       Pull images from remote location.
  push       Push images to remote location.
  run        Run containers from images.
  sh         Shell into container-hosting virtual machines.
  start      Start containers.
  stop       Stop containers.
  system     Manage Nautilus Container Engine.
  tag        Create tag images that refer to the source ones.
  version    Prints the version of vctl

Run 'vctl COMMAND --help' for more information on a command.

OPTIONS:
  -h, --help   help for vctl

You can see we’re off to a good start; there’s a lot we can do already. We also have many aliases in place, most commonly ‘ls’ for ‘get’ and ‘i’ for ‘images’.

As a quick example, to run our first container we first need to start the services.

mike@OctoBook >_ vctl system start
Preparing storage...
Container storage has been prepared successfully under /Users/mike/.nautilus/storage
Preparing container network, you may be prompted to input password for administrative operations...
Password:
Container network has been prepared successfully using vmnet: vmnet12
Launching container runtime...
Container runtime has been started.

Once the system is prepared and started, we can pull an image:

Note that we’re providing a full URL to the image hosted on docker hub, but we could easily point that to a private Harbor instance or some other OCI-compliant registry. In these examples I’m referring to the full path as the image name, but you could ‘tag’ it and just refer to the tag for simplicity’s sake.

mike@OctoBook >_ vctl pull image docker.io/mikeroysoft/mrs-hugo:dev
───                                                                                ──────   ────────
REF                                                                                STATUS   PROGRESS
───                                                                                ──────   ────────
manifest-sha256:83cd5b529a63b746018d33384b1289f724b10bb624279f444c23a00fd36e3565   Done     100% (951/951)
layer-sha256:c94289816e8009241879a23ec168af2d9189260423f846695538c320c8b99ea7      Done     100% (17575762/17575762)
layer-sha256:9d48c3bd43c520dc2784e868a780e976b207cbf493eaff8c6596eb871cbd9609      Done     100% (2789669/2789669)
layer-sha256:b6dac14ba0a98b1118a92bc36f67413ba09adb2f1bb79a9030ed25329f428c1f      Done     100% (5876538/5876538)
config-sha256:cb657649e42335e58df4c02d7753f5c53b6e92837b0486e9ec14f6e8feb69b61     Done     100% (7396/7396)
INFO Unpacking docker.io/mikeroysoft/mrs-hugo:dev...
INFO done

Now that we have the container in our local inventory:

mike@OctoBook >_ vctl ls i
────                                 ─────────────               ────
NAME                                 CREATION TIME               SIZE
────                                 ─────────────               ────
docker.io/mikeroysoft/mrs-hugo:dev   2020-01-19T17:46:09-08:00   25.0 MiB

Cool, there’s my image (you can see it live at https://mikeroysoft.com!).

Let’s start it up!

mike@OctoBook >_ vctl run container my-www --image=docker.io/mikeroysoft/mrs-hugo:dev -d
INFO container my-www started and detached from current session

mike@OctoBook >_ vctl ls c
────     ─────                                ───────                ──   ─────   ──────    ─────────────
NAME     IMAGE                                COMMAND                IP   PORTS   STATUS    CREATION TIME
────     ─────                                ───────                ──   ─────   ──────    ─────────────
my-www   docker.io/mikeroysoft/mrs-hugo:dev   nginx -g daemon off;                running   2020-01-19T17:58:33-08:00

You can see that the container ‘my-www’ is running, based on the mrs-hugo:dev image in its fully-pathed form.

You can see the command being run, and most interestingly you have an IP address.

Opening that up yields whatever was running in the container. In my case it’s nginx serving up some static content on port 80. No port mapping necessary.
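For example, with the IP reported by vctl ls c, the service can be hit directly; the address below is a hypothetical stand-in, not the one from my run:

```shell
# Hypothetical pod IP; substitute the address shown by `vctl ls c`.
CONTAINER_IP="172.16.129.130"
# --max-time keeps the request from hanging if no such host exists.
curl --max-time 2 "http://$CONTAINER_IP/" || echo "no container responding at $CONTAINER_IP"
```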

I won’t go into much further detail in this post, but in the coming days and weeks we will be doing a series of posts and additions to the GitHub repository to explore using all of the capabilities we’ve been able to deliver as part of Nautilus.

Nautilus Tomorrow: Let’s get there together

This is only the first iteration, and we’re making a great effort to ensure that we can iterate quickly. This means not only listening better and hearing more from our users, but also tracking issues more transparently and holding ourselves accountable for delivering fixes and improvements in a timely manner.

We see a not-so-distant future where we can define complex multi-VM + container + Kubernetes cluster setups locally on the desktop using a standard markup, and share that quickly and easily with others even if they’re using Windows.

So there you have it… time to go get started!

Direct Download

VMware Fusion on GitHub