Introduction
In modern system design, engineers often need to validate not just one configuration but an entire design space of operating conditions and component values. Manually testing every variation is time-consuming and prone to error. This is why automation in circuit simulation is so critical. Synopsys SaberRD addresses this need with its Experiment Analyzer – a powerful feature that lets you automate repeated tests, waveform captures, measurements, and report generation. In this post, we’ll explore how SaberRD enables automated testing of a mixed-signal circuit using the Experiment Analyzer.
Our case study is a smart Voltage-Controlled Current Source (VCCS) with digital enable logic and over-voltage protection. This example combines analog, digital, and safety features, mirroring real-world mixed-signal challenges (such as automotive sensor drivers or industrial current sources). The circuit includes an analog current source controlled by a digital signal (with proper slew timing) and a feedback mechanism that shuts off current if the load voltage exceeds a threshold. SaberRD’s training guides use this same example to demonstrate mixed-signal simulation with enable/disable functionality, timing delays, and self-protection. We will step through how to set up and automate experiments on this design using SaberRD’s Experiment Analyzer.
Circuit Overview: Smart VCCS Design
Figure 1: The Smart Voltage-Controlled Current Source design combines analog control, digital logic (enable), and safety feedback. Key functional blocks are highlighted.
The base circuit under test is a VCCS controlled by digital logic with built-in protection. The system forms a closed loop: sensor input → digital logic → analog current source → load → feedback to protection. Some key components of this mixed-signal design include:
- An analog VCCS block whose output current is set by the sensor's control voltage.
- Digital enable logic that gates the current source on and off with proper slew timing.
- An over-voltage protection feedback path that shuts off the current when the load voltage exceeds a threshold.
- The load itself, whose voltage is monitored by the protection path.
Together, these components create a realistic mixed-signal current source with enable gating and self-protection. Before automating multiple tests on this system, we first verify its nominal behavior with a single simulation run.
Step 1 – Baseline Transient Simulation
We begin by running an initial transient simulation on the VCCS circuit to establish a baseline. In SaberRD, a 100 ms time-domain simulation (with a fine step size on the order of 2 µs for accuracy) is executed under nominal conditions. This baseline run confirms the expected behavior of the circuit:
- The output current increases during enabled periods and returns to zero when the enable signal is de-asserted.
- The current drops whenever the load voltage exceeds the over-voltage protection threshold.
This transient plot would typically show the output current increasing during enabled periods and dropping whenever the protection threshold is crossed. Verifying these behaviors in the baseline run is critical before moving into automated sweeping – it ensures the model and control logic are functioning as designed in at least one scenario.
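To make the expected behavior concrete, the protection logic can be pictured with a tiny Python sketch. This is an illustrative behavioral model only, not a SaberRD netlist: the `smart_vccs_step` function, the 1 mA set-point, and the load values are assumptions for illustration; only the 3.2 V threshold comes from the design.

```python
def smart_vccs_step(enable: bool, i_set: float, r_load: float,
                    v_threshold: float = 3.2) -> tuple[float, bool]:
    """One operating point of the smart VCCS: the source drives i_set amps
    into the load when enabled, and the protection logic cuts it off if
    the resulting load voltage exceeds v_threshold volts."""
    i_out = i_set if enable else 0.0
    v_load = i_out * r_load
    tripped = v_load > v_threshold
    if tripped:
        i_out = 0.0  # over-voltage protection shuts the source off
    return i_out, tripped

# Disabled source delivers no current, so protection never trips.
print(smart_vccs_step(False, 1e-3, 3e3))   # (0.0, False)
# Enabled into a large load: 1 mA * 10 kOhm = 10 V > 3.2 V, protection trips.
print(smart_vccs_step(True, 1e-3, 10e3))   # (0.0, True)
```

In the real circuit the protection acts dynamically over the transient, but the same threshold comparison drives the dips seen in the waveform.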
Figure 2: Simulation of the Smart Voltage-Controlled Current Source in SaberRD, showing the load voltage limited by the protection logic (top) and the corresponding sensor input with enable control signals (bottom).
Step 2 – Setting Up the Automated Experiment (Parametric Sweep)
With the baseline behavior validated, the next step is to automate a whole series of simulations across different conditions. We use SaberRD's Experiment Analyzer to create a test campaign (in this example, named Exp_Vary_Loop) that will sweep through various load conditions automatically. The Experiment Analyzer allows us to combine multiple analysis and measurement tasks into one reusable experiment definition. For our smart VCCS circuit, we set up the following tasks in the experiment:
- A Parametric Sweep loop stepping the load resistance R_load from 3 kΩ to 10 kΩ.
- A Transient Analysis task (100 ms) nested inside the sweep.
- A Measurement task computing the RMS load current (rms_load) for each run.
- A Test task flagging pass/fail against the rms_load > 1.6 criterion.
Figure 3: Setting up the automated experiment in SaberRD's Experiment Analyzer. The interface allows adding a Parametric Sweep loop with nested analysis, measurement, and test tasks.
With these tasks defined, our experiment setup in SaberRD is complete. The transient, measurement, and test tasks are nested under the parametric sweep, meaning they will execute for each resistor value in the sweep. At this point, we save the experiment and get ready to run the automated campaign.
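Conceptually, the nested experiment behaves like the following Python sketch. The function names, report fields, and the caller-supplied `simulate` hook are assumptions for illustration; in SaberRD the equivalent tasks are configured in the GUI and executed by the tool's own task engine.

```python
import math

PASS_THRESHOLD = 1.6  # pass criterion on rms_load, as used in the test task

def rms(samples):
    """Root-mean-square of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def run_experiment(r_load_values, simulate):
    """Mimic the Exp_Vary_Loop structure: for each sweep point, run the
    transient (here a caller-supplied simulate(r_load) returning load-current
    samples), measure rms_load, and apply the pass/fail test."""
    report = []
    for r_load in r_load_values:                 # parametric sweep loop
        i_load = simulate(r_load)                # nested transient task
        rms_load = rms(i_load)                   # nested measurement task
        report.append({
            "r_load": r_load,
            "rms_load": rms_load,
            "pass": rms_load > PASS_THRESHOLD,   # nested test task
        })
    return report
```

The sketch only shows the control flow the Experiment Analyzer automates: every task nested under the sweep runs once per sweep value.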
Step 3 – Running the Sweep and Reviewing Results
Figure 4: Automated simulation results from the Experiment Analyzer. Top: load current waveforms over time for each value of R_load (different colored traces). The protection circuit limits the current when the load voltage threshold is exceeded, causing the dips in current. Bottom: a summary plot of the RMS load current versus the load resistance, with one point per simulation run.
Once the experiment is launched, SaberRD automatically executes the entire sweep. For each R_load value from 3 kΩ to 10 kΩ, the tool runs a 100 ms transient simulation, captures the waveforms, calculates the RMS load current, and evaluates the pass/fail condition – all without user intervention. Thanks to the multi-core setting, multiple simulations run in parallel, dramatically speeding up the overall analysis.
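The multi-core behavior can be pictured as mapping sweep points onto a worker pool, roughly like this Python sketch. Threads stand in for the independent simulator processes SaberRD actually launches, and `simulate_and_measure` is a hypothetical stand-in for one full transient-plus-measurement run.

```python
from concurrent.futures import ThreadPoolExecutor

def run_sweep_parallel(r_load_values, simulate_and_measure, workers=4):
    """Run each sweep point concurrently; results come back in sweep order.
    A real flow would dispatch a separate simulator process per point."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_and_measure, r_load_values))

# Toy measurement: pretend the metric is just the resistance scaled down.
results = run_sweep_parallel([3e3, 5e3, 7e3, 10e3], lambda r: r / 1000)
print(results)  # [3.0, 5.0, 7.0, 10.0]
```

Because the sweep points are independent simulations, they parallelize cleanly, which is exactly why the multi-core setting pays off so well here.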
At the end of the experiment, we have a wealth of results at our fingertips. SaberRD generates an Experiment Report logging the resistor value, the measured rms_load for each case, and the pass/fail status of each run. The waveforms for each scenario are available for inspection (as seen in the figure above, where each color trace corresponds to a different load resistor). We can observe, for example, that with lower resistance (higher load current), the over-voltage protection engages more strongly – the current is cut off sooner or more often when the 3.2 V threshold is hit, resulting in a lower RMS current for those cases.
In addition to the individual waveforms, the RMS vs. R_load plot succinctly shows how the performance metric changes across the sweep. This kind of analysis is extremely useful for design verification – it reveals trends such as diminishing current at extreme loads, and helps identify at what point the design fails to meet the rms_load > 1.6 criterion. Indeed, the automated pass/fail tagging quickly highlights which resistor values resulted in a passing condition (e.g., green for those where RMS > 1.6) and which did not (red tags for failing cases). Essentially, SaberRD has turned what would have been dozens of separate simulations into a single, coordinated experiment. The tool acts like a virtual lab bench executing a battery of tests and compiling the results for us.
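As a rough analogue of that tagging step, a post-processing pass over the report rows might look like the following. The field and tag names here are assumptions for illustration; SaberRD performs this classification itself inside the Experiment Report.

```python
def tag_results(report_rows, threshold=1.6):
    """Attach a PASS/FAIL tag to each sweep result, mirroring the green/red
    highlighting in the Experiment Report."""
    return [
        {**row, "tag": "PASS" if row["rms_load"] > threshold else "FAIL"}
        for row in report_rows
    ]

rows = [{"r_load": 3e3, "rms_load": 1.8}, {"r_load": 10e3, "rms_load": 1.2}]
print([r["tag"] for r in tag_results(rows)])  # ['PASS', 'FAIL']
```

A one-line criterion like this is easy to tighten or relax as the specification evolves, and the whole sweep is re-evaluated automatically on the next run.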
Figure 5: Experiment Analyzer results for the parametric sweep of load resistance. One test case fails the RMS current condition (highlighted red), while the other runs complete successfully (green), demonstrating automated pass/fail validation across design points.
The engineer can now review this data to make design decisions, for instance adjusting component values or control logic if too many scenarios failed, and then re-running the experiment to verify the improvements. This automated workflow not only saves a huge amount of time but also ensures consistency and completeness in testing the mixed-signal design.
Key Takeaways
- SaberRD's Experiment Analyzer bundles parameter sweeps, analyses, measurements, and pass/fail tests into one reusable experiment definition.
- A single automated campaign replaces dozens of separate manual simulations, and multi-core execution speeds the sweep up further.
- Automated pass/fail tagging makes it immediately obvious which design points violate a specification.
- Always validate a baseline run first, so you know the model and control logic behave as designed before sweeping.
Video Walkthrough
For a hands-on demonstration of this case study, check out the accompanying video walkthrough (link below). In the video, we go through the process step by step:
- Running the baseline transient simulation of the smart VCCS.
- Building the Exp_Vary_Loop experiment with its parametric sweep, transient, measurement, and test tasks.
- Launching the sweep and reviewing the waveforms, the RMS summary plot, and the pass/fail report.
Video link: Smart VCCS Automated Test in SaberRD
Conclusion
Automated testing is rapidly becoming a necessity in mixed-signal design workflows. By embedding parameter sweeps, measurements, and pass/fail checks into a single cohesive flow, SaberRD turns simulation into a virtual lab for your design. In our VCCS example, what would have been a tedious set of manual experiments became an efficient, push-button operation – accelerating design iterations and boosting confidence in the results. This methodology is invaluable in industries like automotive and industrial electronics, where complex analog-digital systems (converters, motor drives, sensor interfaces, etc.) must be thoroughly validated under many conditions before hardware build.
Whether you’re exploring an ADC’s performance across process corners or verifying a motor drive circuit under various loads, SaberRD’s Experiment Analyzer gives you the power to automate those simulation campaigns and catch issues early. The result is a more rigorous design validation process with significantly less manual effort, allowing engineers to focus on design improvements rather than repetitive testing. Embracing such automation not only saves time, but also enhances the reliability and robustness of system-level designs in the long run.