The vHIL – The new sibling in the loop family

Since the introduction of separable software components and virtual testing, software development for mechatronic systems can take place in parallel with the production of the hardware. This progress has made it possible to shorten development time and to gain knowledge, through testing, at an earlier stage of the process.

The process of testing the software includes both virtual testing, as mentioned above, and actual physical testing with purpose-designed hardware. The whole process can be compared to a multistage rocket, where a sequence of “in-the-loop” tests targets different areas of the software development.

In this post, we will look into this family of “in-the-loop” tests and present its different members. Above all, we will present the latest sibling in the family, the vHIL, and the introduction of virtual processors into the “in-the-loop” environment.

Physical testing

Physical testing remains important, since the final goal of the software is to control physical hardware. However, physical testing has its drawbacks. To perform it, one needs plants, i.e. physical test prototypes, which can be both expensive and time-consuming to produce. Because of this, the number of plant objects is highly restricted. They might also have to be shared among multiple development teams, so the effective time for testing is greatly limited. Add to this that the plants are often hard-configured, which means that to test another set-up, the plant has to be sent to the workshop to be modified, a time-consuming and error-prone procedure.

Physical testing is also a kind of gamble, since this can be the first time the software, uploaded on the ECU connected to the plant, is in contact with any physical hardware, and the software may contain bugs that can damage the plant. Since the demand for test prototypes can be very high, one wants to avoid the risk of damaging them. Therefore, physical testing first takes place late in the development process, when the software has gone through some safety checks; this is where the “in-the-loop” tests come into the picture.

MIL – Model in the loop

The whole purpose of the “in-the-loop” process is to have the software as mature as possible when it has its first contact with the physical plant. At the first stage, the control software components, or control models, are tested; one checks the behavior of the control algorithms against a virtual plant model. The purpose of these virtual plant models is to simulate the physical behavior of the physical test objects. Often, a software component is limited to controlling a specific task, which also limits the required range of the virtual plant model.

The testing starts by considering single components, unit testing, and as they start to function properly, one starts to consider larger and larger groups of components. With the groups, it is possible to test the interfaces and the communications between the components.

When the tests show that the behavior is satisfactory, the control algorithms are implemented. The implementation can be performed either manually, by writing C code, or by using code-generation tools such as Simulink.

SIL – Software in the loop

At the software-in-the-loop stage, the implementation of the control algorithms is validated. The compiled code is simulated against the same virtual plant model used in the MIL testing. In this way, it is possible to compare the results from the two stages to verify that the implementation has not changed the behavior of the system.

The problem with SIL is that the C code is compiled for the engineer's workstation, not for the actual microcontroller. There are several differences between the two, for example in precision and in the available data types. The microcontroller is also much more limited when it comes to memory and performance.
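As an illustration (my own sketch, not from the original text), the following Python snippet mimics how the same accumulation diverges between 64-bit floats, typical for a workstation build, and 32-bit floats, common on a small MCU:

```python
import numpy as np

# Hypothetical illustration: accumulate 0.1 one hundred thousand times
# in double precision (workstation-like) and single precision (MCU-like).
def accumulate(increment, steps, dtype):
    acc = dtype(0.0)
    inc = dtype(increment)
    for _ in range(steps):
        acc = dtype(acc + inc)
    return float(acc)

host = accumulate(0.1, 100_000, np.float64)    # workstation-style build
target = accumulate(0.1, 100_000, np.float32)  # MCU-style build

# The two builds run the "same" arithmetic but drift apart --
# exactly the kind of difference a workstation-only SIL run cannot reveal.
print(host, target)
```

The two results differ noticeably, even though the source code is identical, which is the motivation for the next stage.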

PIL – Processor in the loop

The third stage is not as commonly used. The purpose here is to remedy the shortcomings of the SIL environment by compiling the code for the microcontroller architecture. Again, the implementation of the control algorithms as compiled C code is tested, using the same setup with virtual plant models as in the first two “in-the-loop” stages. Besides testing the compiled code, it is also possible to check memory allocation and execution times.

HIL – Hardware in the loop

It is first at this stage that the code leaves the host PC and is uploaded to a real ECU/MCU. When leaving the host PC, the control algorithms can no longer be handled as single units or small groups of units. Instead, they are one component of many in a software stack, where they share space with other software components, such as other control algorithms, but also with other software layers, like the OS, drivers, and middleware.

Getting the software stack compiled, and the components in it to function together as a unit, is the first task when setting up an HIL environment. It is not uncommon that multiple iterations through all the previous “in-the-loop” stages are required before a mature software stack is ready for the HIL environment.

The actual HIL environment consists of an ECU/MCU connected to the virtual plant model through an HIL simulation box, which runs the simulation of the virtual plant model in real time. The problem is that this environment suffers from some of the drawbacks of physical testing: expensive setups shared among multiple teams doing integration and testing.

vHIL – Virtual hardware in the loop

Lately, fast simulation models of digital ECU/MCU hardware, so-called virtual prototypes or VPs, have appeared on the market. These execute the same binary software as the target MCU and can be simulated against the same virtual plant models used for the MIL and SIL testing. This new environment is the vHIL.

It should be stated that the vHIL will not replace the HIL. Instead, it smooths the path between the SIL and the HIL. Since it does not require any physical ECU, the vHIL can be up and running early in the process and deliver a more mature software stack to be integrated into the HIL environment, reducing the bring-up time from weeks to days, which leaves more time for actual HIL testing.

With the VP, one can achieve better control and visibility than when testing on a real ECU/MCU. Many of the simulation platforms for VPs provide the opportunity to debug, analyze, and automate the testing procedure. It is possible to set breakpoints in the software for debugging, which halt the whole simulation. From these breakpoints, one can step through the software line by line and jump in and out of subroutines. The platforms can also visualize the communication paths in the software and check the code coverage during a test.

With the vHIL, more tests are performed early in the process, since one does not need to wait for a physical prototype of the ECU. The vHIL is also easier to scale up and run at multiple locations at the same time. More testing at an early stage results in higher-quality testing in the HIL environment and in the physical testing. It is also possible, with the vHIL, to prepare the tests for the upcoming HIL in advance: compose and test them in a virtual environment and have them ready when the prototypes of the physical ECUs appear.

Two examples of VP platforms are the Virtualizer from Synopsys and the Vehicle System Integrator, VSI, from Mentor Graphics.

Solving Ordinary Linear Differential Equations with Random Initial Conditions


Ordinary linear differential equations can be solved as trajectories given some initial conditions. But what if your initial conditions are given as distributions of probability? It turns out that the problem is relatively simple to solve.

Transformation of Random Variables

If we have a random system described as

\dot{X}(t) = f(X(t),t) \qquad X(t_0) = X_0

we can write this as

X(t) = h(X_0,t)

which is an algebraic transformation of a set of random variables into another representing a one-to-one mapping. Its inverse transform is written as

X_0 = h^{-1}(X,t)

and the joint density function f(x,t) of X(t) is given by

f(x,t) = f_0 \left[ x_0 = h^{-1}(x,t) \right] \left| J \right|

where J is the Jacobian

J = \left| \frac{\partial x^T_0}{\partial x} \right|.
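As a minimal one-dimensional sketch of this formula (the mapping and numbers are my own, not from the post), take X_0 ~ N(0,1) and the one-to-one map X = h(X_0) = 2 X_0 + 1, so h^{-1}(x) = (x - 1)/2 and |J| = 1/2:

```python
import numpy as np

# Standard normal density, written out explicitly.
def std_normal_pdf(z):
    return np.exp(-z**2 / 2.0) / np.sqrt(2.0 * np.pi)

x = np.linspace(-4.0, 6.0, 101)
x0 = (x - 1.0) / 2.0          # inverse map h^{-1}(x)
J = 0.5                       # |d x0 / d x|

# Density of X via the transformation formula.
f_transformed = std_normal_pdf(x0) * J

# For a linear map of a Gaussian the answer is known directly: N(1, 2^2).
f_direct = np.exp(-((x - 1.0)**2) / (2.0 * 4.0)) / np.sqrt(2.0 * np.pi * 4.0)

print(np.max(np.abs(f_transformed - f_direct)))  # ~0 (machine precision)
```

The transformation formula reproduces the known density exactly, as it should for a one-to-one mapping.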

Solving Linear Systems

For a system of differential equations written as

\dot{x}(t) = A x(t) + B u(t)

a transfer matrix can be defined

\Phi(t,t_0) = e^{A(t-t_0)}

which can be used to write the solution as

x(t) = \Phi(t,t_0) x(0) + \int_{t_0}^{t} \Phi(t,s) B u(s) \, ds.

The inverse formulation of this solution is

x(0) = \Phi^{-1}(t,t_0) x(t) - \Phi^{-1}(t,t_0) \int_{t_0}^{t} \Phi(t,s) B u(s) \, ds.
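These two formulas can be checked numerically. The sketch below uses an arbitrary stable system of my own choosing with a constant input, for which the integral has the closed form A^{-1}(\Phi - I) B u, and verifies that the inverse formulation recovers the initial state:

```python
import numpy as np
from scipy.linalg import expm

# Example system (chosen for illustration): stable 2x2 dynamics, constant input.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
u = np.array([[1.0]])
x0 = np.array([[1.0], [0.0]])
t = 0.7

Phi = expm(A * t)  # transfer matrix Phi(t, 0)

# For constant u: int_0^t Phi(t,s) B u ds = A^{-1} (Phi - I) B u.
forced = np.linalg.solve(A, Phi - np.eye(2)) @ B @ u

x_t = Phi @ x0 + forced  # forward solution x(t)

# Inverse formulation: recover x(0) from x(t).
Phi_inv = np.linalg.inv(Phi)
x0_recovered = Phi_inv @ x_t - Phi_inv @ forced

print(np.allclose(x0_recovered, x0))  # True
```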

Projectile Trajectory Example

Based on the formulations above we can now move on to a concrete example where a projectile is sent away in a vacuum. The differential equations to describe the motion are

$latex \left\{ \begin{array}{rcl}
\dot{p}_{x_1}(t) & = & p_{x_2}(t) \\
\dot{p}_{x_2}(t) & = & 0 \\
\dot{p}_{y_1}(t) & = & p_{y_2}(t) \\
\dot{p}_{y_2}(t) & = & -g
\end{array} \right.&bg=ffffff$

where p_{x_1} and p_{y_1} are the Cartesian coordinates of the projectile in a two-dimensional space, while p_{x_2} is the horizontal velocity and p_{y_2} is the vertical velocity. The only external force is gravity (-g) and there is no wind resistance, which means that the horizontal velocity will not change.

The matrix representation of this system becomes

$latex A = \left( \begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array} \right)&bg=ffffff$

B^T = \left( \begin{array}{cccc} 0 & 0 & 0 & 1 \end{array} \right).

The transfer matrix is (matrix exponential, not element-wise exponential)

$latex \Phi(t,t_0) = e^{A(t-t_0)} = \left( \begin{array}{cccc}
1 & 0 & t-t_0 & 0 \\
0 & 1 & 0 & t-t_0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array} \right)&bg=ffffff$
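This closed form is easy to double-check: A is nilpotent (A² = 0), so the exponential series terminates after the linear term. A small sketch:

```python
import numpy as np

# The projectile's system matrix: state ordering (p_x1, p_y1, p_x2, p_y2).
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)

dt = 1.5  # an example value of t - t0

# A @ A == 0, so e^{A dt} = I + A dt exactly (the series terminates).
Phi = np.eye(4) + A * dt

expected = np.array([[1, 0, dt, 0],
                     [0, 1, 0, dt],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

print(np.allclose(A @ A, 0.0), np.allclose(Phi, expected))  # True True
```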

Calculating the solution of the differential equation gives

x(t) = \Phi(t,0) x(0) + \int_0^t \Phi(t,s) B u(s) \, ds

where u(t) = -g and x^T(0) = \left( \begin{array}{cccc} 0 & 0 & v_x & v_y \end{array} \right). The parameters v_x and v_y are the initial velocities of the projectile.

The solution becomes

$latex x(t) = \left( \begin{array}{c}
v_x t \\ v_y t - \frac{g t^2}{2} \\ v_x \\ v_y - g t
\end{array} \right)&bg=ffffff$

and the time when the projectile hits the ground is given by

p_{y_1}(t) = v_y t - \frac{g t^2}{2} = 0 \qquad t > 0


t_{y=0} = 2 \frac{v_y}{g}.

A visualization of the trajectory given v_x = 1 and v_y = 2 with gravity g = 9.81 shows an example of the motion of the projectile:
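The same trajectory can be reproduced numerically; a small sketch using only the closed-form solution above:

```python
import numpy as np

# Values from the text: v_x = 1, v_y = 2, g = 9.81.
v_x, v_y, g = 1.0, 2.0, 9.81

t_land = 2.0 * v_y / g            # landing time t_{y=0} = 2 v_y / g
t = np.linspace(0.0, t_land, 50)

p_x = v_x * t                     # horizontal position
p_y = v_y * t - 0.5 * g * t**2    # vertical position

# At t_land the projectile is back at ground level (p_y ~ 0).
print(round(t_land, 4), float(p_y[-1]))
```

The arrays p_x and p_y trace the parabola; passing them to any plotting library reproduces the figure.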


Now, if we assume that the initial state x(0) can be described by a joint Gaussian distribution, we can use the formula shown earlier to say that

f(x,t) = f_0\left[x(0)=h^{-1}(x,t)\right] \left|J\right| = \frac{1}{\sqrt{\left|2 \pi \Sigma \right|}} e^{-\frac{1}{2}(x(0)-\mu)^T \Sigma^{-1} (x(0)-\mu)},

where \left| J \right| = \left| \Phi^{-1}(t) \right|, \mu^T = \left( \begin{array}{cccc} 0 & 0 & v_x & v_y \end{array} \right) and

$latex \Sigma = \left( \begin{array}{cccc} 0.00001 & 0 & 0 & 0 \\
0 & 0.00001 & 0 & 0 \\
0 & 0 & 0.01 & 0 \\
0 & 0 & 0 & 0.01 \end{array} \right)&bg=ffffff$

which means that we have high confidence in the firing position but less in the initial velocity.

We are only interested in where the projectile lands and we can marginalize the velocities to get:

f\left(p_{x_1},p_{y_1},t\right) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,t) \, dp_{x_2} \, dp_{y_2}

which when plotted gives


Since we have used the landing time of the deterministic trajectory, we get a spread across the y-axis as well (the ground is located at p_{y_1} = 0). We could marginalize the y-direction as well to end up with:


This shows the horizontal distribution of the projectile at the time when the deterministic trajectory of the projectile is expected to hit the ground.
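Assuming the linear-Gaussian structure above, the whole computation, propagating the mean and covariance to the landing time and then marginalizing out the velocities by keeping only the position block, can be sketched as:

```python
import numpy as np

v_x, v_y, g = 1.0, 2.0, 9.81
t = 2.0 * v_y / g                        # landing time of the mean trajectory

# State ordering (p_x1, p_y1, p_x2, p_y2); A is nilpotent so e^{A t} = I + A t.
A = np.zeros((4, 4))
A[0, 2] = A[1, 3] = 1.0
Phi = np.eye(4) + A * t

# Forced response for the constant input u = -g.
c = np.array([0.0, -0.5 * g * t**2, 0.0, -g * t])

mu0 = np.array([0.0, 0.0, v_x, v_y])
Sigma0 = np.diag([1e-5, 1e-5, 1e-2, 1e-2])

# A Gaussian pushed through a linear map stays Gaussian:
mu_t = Phi @ mu0 + c
Sigma_t = Phi @ Sigma0 @ Phi.T

# Marginalizing out the velocities = keeping the position rows/columns.
mu_pos = mu_t[:2]
Sigma_pos = Sigma_t[:2, :2]

print(mu_pos, Sigma_pos)
```

The position covariance Sigma_pos has grown relative to its initial value because the velocity uncertainty feeds into the positions through the t-dependent entries of Phi.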


Given a set of ordinary differential equations, it is possible to derive the uncertainty of the states given a probability distribution in the initial conditions. There are two other important cases to look into as well: stochastic input signals and random parameters.