Engineering teams often want to run parametric sweeps or multiple simulations in parallel (for optimization studies, Monte Carlo analyses, etc.). How does licensing handle concurrent runs? The answer differs for Standard vs Enterprise licenses, but both models support some form of license sharing to maximize throughput:
- Standard License Sharing: Starting with Ansys 2022 R1.1, Lumerical introduced the ability for a single Standard solve license to be shared across multiple concurrent jobs, as long as the total resource usage stays within one license’s capacity [optics.ansys.com]. In simpler terms, one Standard license (32 cores) can be split among several simulations if their combined core count is ≤ 32. For example, if you run 4 simultaneous simulations, each using 6 cores (24 cores total), the license manager will allow them all to proceed on one license [optics.ansys.com]. This is extremely useful for sweep studies on one machine – you don’t get penalized for running many small jobs at once. However, if the combined demand exceeds the 32-core limit, sharing is disabled and each simulation checks out its own license. For instance, 6 concurrent sims × 6 cores = 36 cores total, which breaks the 32-core threshold, so those 6 jobs would consume 6 Standard licenses rather than 2 (see the sketch after this list). Similarly for GPU: one Standard license can in theory be shared by multiple small GPU jobs if the total SMs in use is ≤ 16. Note that Standard license sharing has some restrictions – it works for local runs and certain OS/launcher configurations (e.g., on Windows you must use Microsoft MPI or local runs, not Intel MPI). But for most straightforward cases (running sweeps on one machine), it’s a big advantage.
- Enterprise License & HPC for Sweeps: Enterprise licenses take a different approach: instead of “free” sharing, they leverage the HPC token mechanism to enable parallel runs without extra full licenses. Essentially, if you have one Enterprise solver license, you can launch multiple concurrent simulations and use HPC tokens to cover the additional load as if it were one big job. The license manager doesn’t necessarily require a second Enterprise license for a second job if you have HPC capacity available. This is sometimes called parametric HPC licensing. For example, suppose you want to run 4 simulations at once, each on 6 cores (24 cores total). With one Enterprise license (4 cores base) and enough HPC tokens for the other 20 cores, you could run all 4 jobs concurrently on that single license (the sketch after this list works through this example). The HPC tokens essentially substitute for what would otherwise require multiple solver licenses. Ansys provides a parametric license calculator to help determine HPC needs for a given sweep setup. The end result: Enterprise licensing can be extremely efficient for sweeping lots of cases in parallel, especially when combined with a pool of HPC tokens. You pay for the total compute footprint (via HPC tokens) rather than per-job licenses.
- Multiple Solver Licenses: Of course, you can always scale out by simply having multiple solver licenses of either type, which allows truly independent usage. For instance, if you own 2 Standard FDTD licenses, you can run two heavy simulations (up to 32 cores each) at the same time on different machines. Companies will sometimes purchase a mix – e.g. one Enterprise license (for big HPC runs) and one or two Standard licenses (for smaller everyday runs or for other team members). The optimal mix depends on your workload profile.
- Remote/Distributed Jobs: If you launch simulations on remote servers or an HPC cluster (outside of your local machine), the license checkout works similarly – the solver license is pulled from your license server and HPC tokens are allocated as needed. With Enterprise, you might run a distributed MPI job across multiple nodes: only one Enterprise license is used (checked out on the head node typically) and the total core count across nodes determines how many HPC tokens are needed. With Standard licenses, a distributed job spanning multiple machines will consume one license per 32 cores used across the cluster. For example, a simulation distributed on 4 nodes × 8 cores each (32 cores total) uses 1 Standard license; if it were 4 nodes × 32 cores each (128 cores total), that’s 4 Standard licenses (one per node in this case). In all cases, the machines need access to the central license server (or to Ansys’s cloud license system, which we discuss next).
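To make the counting rules above concrete, here is a minimal Python sketch of the arithmetic described in this section. It assumes the figures quoted above (32 cores per Standard license, sharing disabled once combined demand exceeds one license’s capacity, and an Enterprise base of 4 cores with HPC tokens covering the remainder). It is an illustration only; Ansys’s License Estimation tool remains the authoritative way to check a given configuration.

```python
import math

# Illustrative license arithmetic only, based on the figures quoted above.
STD_CORES_PER_LICENSE = 32   # one Standard solver license covers up to 32 cores
ENT_BASE_CORES = 4           # one Enterprise solver license includes 4 cores

def standard_licenses_needed(job_cores):
    """Standard model for concurrent jobs: share one license while the combined
    core count fits within it; otherwise each job checks out its own license(s)."""
    total = sum(job_cores)
    if total <= STD_CORES_PER_LICENSE:
        return 1
    return sum(math.ceil(c / STD_CORES_PER_LICENSE) for c in job_cores)

def enterprise_hpc_cores_needed(job_cores):
    """Enterprise model: extra cores that HPC tokens must cover beyond the
    4 cores included with one Enterprise license (parametric HPC sharing)."""
    return max(0, sum(job_cores) - ENT_BASE_CORES)

# Examples from the text above:
print(standard_licenses_needed([6, 6, 6, 6]))    # 24 cores total -> 1 Standard license
print(standard_licenses_needed([6] * 6))         # 36 cores total -> 6 Standard licenses
print(standard_licenses_needed([128]))           # one 128-core job -> 4 Standard licenses
print(enterprise_hpc_cores_needed([6, 6, 6, 6])) # 24 - 4 = 20 cores covered by HPC tokens
```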
Takeaway: For batched parametric studies or heavy parallel workloads, Enterprise licensing with HPC offers the most flexibility – you can throw lots of hardware at the problem and only worry about having enough HPC tokens. Standard licenses can share to a point (great for a handful of smaller jobs), but beyond that point you’ll need multiple licenses. Always use the License Estimation tool in Lumerical (introduced in 2025 R1.3), which shows how many licenses will be needed for your chosen configuration and sweep settings. This helps avoid surprises by letting you plan license usage before you hit “run”.
On-Premises vs. Cloud: How Deployment Affects Licensing
Another dimension to consider is where you’ll run Lumerical FDTD – on local/on-premises computers or in the cloud – and how the licensing is handled in each scenario. The good news is, Ansys provides flexible options to support both:
- Traditional On-Premises Licensing: In a typical setup, you’ll install the Ansys License Manager on a server within your network, load your license file (containing your purchased GUI, solver, and HPC features), and client machines will check out licenses over the network. This works whether the simulations run on a user’s workstation, a dedicated simulation server, or an HPC cluster on-prem. It’s a floating license model: as long as the machine running FDTD can reach the license server (via VPN if needed), it can pull a license. This model is straightforward for on-site hardware. It can also extend to cloud VMs if you configure networking – for example, you could run an AWS EC2 instance that connects back to your company’s license server to pull a license (a minimal sketch of this setup follows after this list).
- Ansys Shared Web Licensing (Cloud License Server): Starting with Ansys 2024 R1, there’s a new option that simplifies things: Ansys Shared Web Licensing for Lumerical. Instead of setting up your own license server, users can simply log in with their Ansys account credentials and the software will retrieve license entitlements from Ansys over the internet. In other words, Ansys hosts the license service for you (“license in the cloud”), and you authenticate to access it. This is great for cloud deployments or remote users – you avoid complex IT setups. For example, if you launch a virtual machine on Azure or AWS for some heavy simulations, you can enable web licensing, sign in, and immediately access your licenses without a direct network link to a physical license server. Of course, this requires that your licenses are set up in the Ansys cloud portal (your entitlements tied to your account). It’s an increasingly popular approach to ease cloud usage and was introduced to give a more “plug and play” license experience.
- Ansys Access on Azure: As of Ansys 2024 R2, Lumerical FDTD is also available through Ansys Access on Microsoft Azure. This is essentially a pre-configured Azure cloud VM with the full Ansys Lumerical suite and HPC environment ready to go. Companies can opt to use these on-demand VMs for simulation bursts. In such cases, you can either use your existing licenses (via VPN or web licensing as above) or leverage Ansys’s cloud licensing (credits) as we’ll discuss below. The key is that these VMs come with the software installed and optimized for Azure’s CPU/GPU hardware, speeding up the deployment.
- Bring Your Own License (BYOL) vs. Ansys Cloud Licenses: On cloud platforms (like AWS/Azure), you generally have two choices: BYOL – meaning use the licenses you’ve purchased (floating or web) just as you would on-prem – or use Ansys Cloud licensing which is usage-based. BYOL might be suitable if you already have sufficient licenses and just need extra hardware. But if you only need cloud occasionally or at larger scale than your licenses allow, Ansys’ elastic licensing may be more cost-effective.
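For the BYOL route mentioned above, a cloud VM simply needs to reach your license server before launching a solve. The sketch below shows one way to do that from a batch script, assuming the standard Ansys licensing environment variable and its default port; the host name, install path, and solver command are placeholders, so substitute your own.

```python
import os
import subprocess

# BYOL on a cloud VM: point the Ansys licensing client at the on-prem license
# server (reachable over VPN), then launch a solve in batch mode.
# The port@host value follows the usual Ansys licensing convention; the host
# name, install path, and solver command are placeholders for illustration.
os.environ["ANSYSLMD_LICENSE_FILE"] = "1055@license-server.example.com"

solver_cmd = ["/opt/lumerical/bin/fdtd-engine", "my_design.fsp"]  # placeholder path/command
subprocess.run(solver_cmd, check=True)
```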
Next, let’s dive into that elastic, pay-per-use licensing model – often referred to as Ansys Elastic Units or Cloud Credits – and how it applies to Lumerical FDTD.
Elastic “Burst” Licensing: Pay-Per-Use in the Cloud
To accommodate spiky workloads and avoid heavy upfront costs, Ansys offers an Elastic Licensing option – essentially renting simulation capacity by the hour using a credit system. In the context of Lumerical, this is branded as Ansys Cloud Burst for Lumerical FDTD (introduced in the 2025 R1.3 release). Here’s how it works and why it’s a game-changer:
- Ansys Cloud Subscription & Credits: Your organization can subscribe to an Ansys Cloud plan, which provides access to a pool of cloud compute credits (sometimes called Ansys Elastic Units). These credits serve as a currency for simulation time on Ansys-managed cloud hardware. When you run a simulation on the cloud via Burst, credits are deducted based on the resources used and the duration. For example, running on a single GPU for an hour costs a certain number of credits, while running on four GPUs for an hour costs proportionally more (the sketch after this list illustrates the arithmetic with placeholder rates). This is pay-per-use – you only spend credits when you actually run something. If you have a light month with few simulations, you spend very little; during peak demand, you can scale out massively as long as you have the credits available.
- No Local Solver License Consumption: One of the big advantages is that cloud runs on Ansys hardware do not consume your local solver licenses. Instead, the solver capability is provided as part of the cloud service (covered by the credits). You do still need at least a GUI license on your end to access the Lumerical interface and submit jobs. Typically, a single Lumerical Enterprise or FDTD license on your machine is enough to enable the Cloud Burst option in the GUI (or an “Enterprise Prep/Post” license, which is essentially GUI-only). Once you launch a cloud job, it uses the cloud’s solver pool via credits, not your on-prem solver license. This means you could run dozens of cloud simulations concurrently without buying dozens of licenses – you just draw down more credits.
- Seamless Integration in the UI: Lumerical FDTD’s interface has built-in support for Cloud Burst. You can simply choose “GPU: Burst” or “CPU: Burst” from the Run dropdown, click Run, and the software will prompt you to log in to Ansys Cloud, select a queue, and submit. It feels like sending a batch job to a supercomputer – except it’s all managed for you. The results automatically download back into your local session when done. This ease-of-use lowers the barrier to using HPC: no need to manually set up cloud VMs or install software on them – Ansys Cloud takes care of provisioning the latest GPU/CPU machines, drivers, and so on.
- Bursting to GPUs for Speed: Many users leverage cloud credits to burst to GPU-heavy instances for big speedups. For example, Ansys Cloud offers powerful NVIDIA GPU configurations (like 4× or 8× NVIDIA L40S GPUs on Azure instances) which can solve huge FDTD problems in minutes. In one benchmark, using 8× L40S GPUs in the cloud cut a simulation down to ~6.6 minutes (versus ~62 minutes on a 63-core CPU cluster) [blog.ozeninc.com]. With elastic licensing, you could spin up those 8 GPUs on demand. Rapid scalability is a hallmark of cloud bursting – you can “dial up” the number of GPUs or cores for each job without any capital investment in hardware [blog.ozeninc.com].
- Cost Considerations: The pay-per-use model can be very cost-efficient for bursty needs. You avoid buying permanent licenses for peak capacity that sits idle much of the time. Instead, you purchase a bucket of credits and consume them as needed. Ansys and partners have reported that running FDTD on GPUs in the cloud can actually be cheaper per simulation than using large on-prem CPU clusters. One example found that complex FDTD runs cost only a few dollars of credits on GPUs – an order of magnitude less than the equivalent CPU time. Of course, costs depend on your cloud rates and how optimized your usage is, but the key is you pay only for what you use. It’s essentially renting licenses and hardware by the hour.
- When to use Elastic Licensing: This model shines for occasional extremely large jobs, multi-GPU scaling, or unpredictable workloads. If you regularly max out your on-site licenses or hardware, cloud credits let you burst beyond those limits. Many companies use a hybrid approach: maintain some on-prem licenses for day-to-day runs and tap into cloud credits for special heavy runs or deadlines.
- Administration: From a licensing manager perspective, Ansys Cloud credits are managed through the Ansys administrative portal. You can allocate who in your team can use them and monitor usage. It requires an upfront agreement with Ansys (purchasing a pool of credits and a subscription). But once set up, it’s straightforward to use from the Lumerical UI.
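To illustrate the pay-per-use bookkeeping described above, here is a small sketch. The per-GPU-hour rate below is a placeholder, not a published Ansys price; actual credit consumption depends on your Ansys Cloud agreement and the hardware tier you select.

```python
# Hypothetical Cloud Burst credit bookkeeping. The rate below is a placeholder,
# NOT a published Ansys price; real rates depend on your subscription and hardware.
PLACEHOLDER_CREDITS_PER_GPU_HOUR = 10.0

def estimated_credits(num_gpus, hours, rate=PLACEHOLDER_CREDITS_PER_GPU_HOUR):
    """Pay-per-use: credits scale with the resources used and the run duration."""
    return num_gpus * hours * rate

print(estimated_credits(1, 1.0))   # one GPU for an hour  -> 10.0 credits (assumed rate)
print(estimated_credits(4, 1.0))   # four GPUs for an hour -> 40.0 credits (assumed rate)
```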
In summary, elastic cloud licensing provides ultimate flexibility – essentially infinite licenses and hardware on demand, within the limits of your credit budget. It turns licensing into an operational expense (OpEx) that scales with usage. The combination of Lumerical’s GPU acceleration and Ansys Cloud’s burst licensing is particularly powerful: you can speed up simulations by an order of magnitude and only pay a few credits for the privilege, rather than needing to permanently upgrade your local compute cluster [blog.ozeninc.com].
Scaling Across Multiple GPUs and Nodes: What to Know
We’ve touched on multi-GPU scenarios and distributed computing, but let’s summarize how Lumerical FDTD handles multi-GPU and multi-node simulations – and what licensing considerations come with that:
- Multi-GPU Support: As of Ansys 2023 R2, Lumerical FDTD introduced GPU acceleration, and by 2024 R1 it gained the ability to use multiple GPUs for a single simulation (on a single machine) [optics.ansys.com]. Specifically, the 2024 R1 update enabled FDTD’s “Express” GPU solver mode to distribute a simulation across up to 8 GPUs in one server (Linux only for now) [optics.ansys.com]. This is great for tackling very large models that need more GPU memory, or simply for getting results faster by parallelizing across GPUs. Licensing-wise, if you are using an Enterprise license, you just need enough HPC tokens to cover the total SM count of all GPUs combined. For example, 4 GPUs with 80 SMs each is 320 SMs total; one Enterprise license covers 4 SMs, so you’d apply HPC tokens for the remaining 316 SMs. Under a Standard license model, multi-GPU is more challenging because you’d need multiple Standard licenses to cover all those SMs (at 16 SMs per license). In our 4-GPU (320 SM) example, that would be 20 Standard licenses (!). The sketch after this list works through both cases. Clearly, multi-GPU runs are a strong case for Enterprise + HPC or for using the Cloud Burst (credits) option.
- Multi-Node (Distributed) Simulations: Lumerical FDTD can also run distributed simulations across multiple nodes (machines) using MPI for CPU simulations, and potentially for GPU with custom setups (though multi-GPU across nodes is not officially supported yet in Express mode). On the CPU side, you might spread a simulation across, say, 2 servers with 16 cores each. If you have Enterprise licensing, it doesn’t matter whether those cores are on one machine or two – your one license plus HPC tokens cover the total core count. If you have Standard licenses, distributing across nodes simply means the 32-core-per-license rule still applies to the total. An example from the Lumerical docs: distributing a 128-core job across 4 machines requires either 4 Standard licenses (32 cores each) or 1 Enterprise license plus HPC packs. One nuance: to run a single simulation on multiple hosts, you’ll need a network filesystem and MPI configured, but from a license perspective it’s treated as one job. Ensure each node can reach the license server. If you’re using Ansys Cloud credits, multi-node is handled by the service automatically (you just request a larger instance with more cores or multi-GPU).
- Concurrent Multi-Node Jobs: If you run multiple simulations on different nodes concurrently (like a small cluster running different jobs for different users), then each job will check out licenses as usual. This is where having multiple solver licenses or using HPC parametric sharing (Enterprise) helps, as discussed. For instance, with two Enterprise licenses, you could run two separate multi-node jobs simultaneously, each with HPC tokens to scale cores.
- GPU Cluster Considerations: Currently, the easiest way to utilize multiple GPUs across multiple nodes for FDTD is to run separate simulations on each node (each node using its local GPUs), rather than one simulation spanning nodes. You might do this in a parameter sweep or optimization loop where each job runs on a different GPU node. In this case, each job needs licensing as per normal: e.g., 4 jobs on 4 separate GPU nodes would need either 4 Standard licenses (if each within 16 SM) or 1 Enterprise license with enough HPC tokens to cover all 4 jobs’ combined SM count if using parametric HPC sharing. Alternatively, this is a perfect scenario for Ansys Cloud: spawn 4 cloud GPU jobs and let the credit system handle it.
- Software Limitations: As of early 2025, multi-GPU FDTD is only officially supported on a single machine (8 GPUs max, Linux) [optics.ansys.com]. There were also improvements in 2024 R2 like support for periodic boundary conditions in GPU mode (previously a limitation) and better multi-GPU configuration controls [optics.ansys.com]. This shows Ansys is actively improving multi-GPU capabilities, which likely means the licensing will adapt as needed (the current token model already supports arbitrary scaling; it’s the solver technology catching up to allow, say, cross-node GPU). Keep an eye on release notes for updates if multi-node GPU solving becomes supported in the future.
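As a companion to the core-count sketch earlier, the snippet below applies the same style of arithmetic to GPU SM counts, using the figures quoted in this list (16 SMs per Standard license, 4 SMs included with an Enterprise license, and 80 SMs per GPU as in the example). Again, this is illustrative only; use the License Estimation tool for the definitive answer.

```python
import math

# SM (streaming multiprocessor) counting for GPU runs, using the figures quoted above.
STD_SMS_PER_LICENSE = 16   # one Standard license covers up to 16 SMs
ENT_BASE_SMS = 4           # one Enterprise license includes 4 SMs

def standard_licenses_for_gpus(num_gpus, sms_per_gpu=80):
    """Standard model: licenses needed to cover the combined SM count."""
    return math.ceil(num_gpus * sms_per_gpu / STD_SMS_PER_LICENSE)

def enterprise_hpc_sms_for_gpus(num_gpus, sms_per_gpu=80):
    """Enterprise model: SMs that HPC tokens must cover beyond the included base."""
    return max(0, num_gpus * sms_per_gpu - ENT_BASE_SMS)

# The 4-GPU example from the text (4 x 80 SMs = 320 SMs total):
print(standard_licenses_for_gpus(4))    # 320 / 16 -> 20 Standard licenses
print(enterprise_hpc_sms_for_gpus(4))   # 320 - 4  -> 316 SMs covered by HPC tokens
```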
Bottom line: multi-GPU and multi-node runs multiply your simulation speed, but you must have the licensing to match. Enterprise licensing (and/or cloud credits) is practically a must for these cases, since Standard licensing doesn’t economically scale to many GPUs or large clusters. The good news is Ansys has designed the Enterprise+HPC model and the Cloud Burst model to accommodate exactly these scenarios – so you can scale up your photonic simulations to as big a platform as you need, when you need it, and pay for just what you use.
Conclusion
Choosing the right licensing model for Ansys Lumerical FDTD is a strategic decision that depends on your workflow. If your simulations are reasonably sized and you run one at a time on a single machine, a Standard license might serve you perfectly. But if you foresee pushing into large multi-core or multi-GPU territory – or you want the freedom to run many cases at once – investing in an Enterprise license with HPC flexibility will pay off in throughput and efficiency. And thanks to Ansys’s cloud offerings, even smaller teams can access massive compute power on demand via elastic licensing, without the burden of managing on-prem HPC hardware.
In this era of GPU-accelerated computing and cloud HPC, Lumerical FDTD’s licensing has evolved to keep pace. Engineers can now get solutions in minutes that used to take days, as long as they have the licenses (or credits) to leverage those technologies. By understanding GUI vs solver seats, Standard vs Enterprise trade-offs, and how HPC tokens or cloud credits can extend your capabilities, you can maximize your ROI on software and hardware.
In summary, match your license model to your needs: use Standard for simplicity and upfront economy, Enterprise for scalability and HPC integration, and don’t hesitate to exploit elastic cloud licensing for those crunch times when you need a simulation boost. With the right combination, you’ll ensure that your licensing never holds back your innovation – allowing you to simulate faster, more efficiently, and with confidence that you’re getting the most out of Ansys Lumerical FDTD.