Case Study
Statistical methods combined with Software-in-the-Loop (SiL) simulation help to analyze the reliability of Advanced Driver Assistance Systems (ADAS).
The validation of Advanced Driver Assistance Systems is performed with scenario-based simulation. Simulation in this context means that the control device on which the ADAS runs is represented by a simulation tool executing the real ECU code, so that software-in-the-loop (SiL) simulations are performed. All inputs for the simulated controller are generated by a simulation environment. These include sensor data and vehicle data as well as data from other ECUs installed in the vehicle. In order to generate plausible input data, a virtual environment is simulated in which the system vehicle moves and other road users (objects) are detected by sensor models. The virtual world is thus processed and captured, and the control quantities calculated from it are delivered back to the vehicle model.
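To make this closed loop concrete, the following is a deliberately toy, self-contained sketch of one SiL cycle. The one-dimensional "vehicle model", the gap "sensor" and the emergency-braking rule standing in for the real ECU code are illustrative stand-ins, not the actual tool chain:

```python
# Toy software-in-the-loop cycle: sensor output feeds the "ECU" logic,
# whose control decision is integrated back into the vehicle model.

def run_jam_end_scenario(ego_speed=30.0, jam_end_pos=200.0,
                         ttc_threshold=2.0, brake_decel=8.0, dt=0.01):
    pos, speed = 0.0, ego_speed
    while speed > 0.0 and pos < jam_end_pos:
        gap = jam_end_pos - pos                    # "sensor model" output
        ttc = gap / speed                          # controller input quantity
        brake = ttc < ttc_threshold                # "ECU" decision logic
        accel = -brake_decel if brake else 0.0     # control fed back
        speed = max(0.0, speed + accel * dt)       # "vehicle model" update
        pos += speed * dt
    return pos < jam_end_pos                       # True: stopped before impact

print(run_jam_end_scenario())                      # accident-free in this toy case
```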
The scenarios are derived from the system requirements, from the research project PEGASUS (a joint project to develop new methods for validating and testing ADAS), and from observations in the field. A logical scenario typically describes a specific traffic situation.
Examples are a cut-in maneuver of another object or a jam-end situation on a highway, as shown in Figure 1. Such a logical scenario can be described with the 6-Layer model [Bock]. For demonstration purposes, only the road layer (1) and the moving objects layer (4) are used for the description. With the help of the corresponding parameters, these logical scenarios can be varied in their characteristics: it is possible to vary the speeds of the vehicles, the distances between objects, or the dynamics of lane change maneuvers.
The so-called specific scenarios resulting from different parameter combinations are simulated and the system reaction of the ADAS is evaluated. This is done through evaluation criteria that reflect the criticality of a specific scenario. For example, the Time-To-Collision (TTC) or the distance between two vehicles can be used as evaluation criteria.
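As an illustration, the common constant-velocity TTC definition (remaining gap divided by closing speed; the simulation software's exact definition may differ) can be sketched as:

```python
def time_to_collision(gap_m, ego_speed_mps, object_speed_mps):
    """Constant-velocity TTC: remaining gap divided by the closing speed.
    Returns infinity if the ego vehicle is not closing in on the object."""
    closing_speed = ego_speed_mps - object_speed_mps
    if closing_speed <= 0.0:
        return float("inf")
    return gap_m / closing_speed
```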
Satisfying design requirements necessitates ensuring that the scatter of all important responses caused by fluctuating geometrical, material or environmental variability lies within acceptable design limits. With the help of a robustness analysis this scatter can be estimated. Within this framework, the scatter of a response may be described by its mean value and standard deviation, or by its safety margin with respect to a specified failure limit. The safety margin can be variance-based (specifying a margin between the failure limit and the mean value) or probability-based (using the probability that the failure limit is exceeded). This is shown in principle in Figure 2.
Within the reliability method, the probability of reaching a failure limit is obtained by integrating the probability density of the uncertainties over the failure domain.
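In standard reliability notation (our formalization for clarity; f_X denotes the joint density of the uncertain inputs and g the limit state function, which is non-positive in the failure domain), this reads:

```latex
P_f = \int_{g(\mathbf{x}) \le 0} f_{\mathbf{X}}(\mathbf{x}) \, d\mathbf{x}
    \;\approx\; \frac{1}{N} \sum_{i=1}^{N} I\!\left[ g(\mathbf{x}_i) \le 0 \right],
    \qquad \mathbf{x}_i \sim f_{\mathbf{X}}
```

The right-hand side is the plain Monte Carlo estimate; for small failure probabilities it needs very many solver calls.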
Importance sampling improves on this: the sampling density is adapted in order to cover the failure domain sufficiently and to obtain more accurate probability estimates with far fewer solver calls.
Other methods like the First or Second Order Reliability Method (FORM and SORM) are even more efficient than the sampling methods, as they approximate the boundary between the safe and the failure domain, the so-called limit state.
In contrast to a global low-order approximation of the whole response, the approximation of the limit state around the most probable failure point (MPP) is much more accurate. A good overview of these "classical" methods is given in the literature.
In our study we have investigated several methods. One reliable and robust method for our application is the Adaptive Importance Sampling strategy. In this approach an importance sampling density is obtained by iteratively adjusting a modified sampling density.
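The following sketch illustrates the iterative density-adjustment idea under simplifying assumptions (a single Gaussian sampling density in standard normal space, re-centered and re-scaled on the failure samples of each iteration). It is a minimal illustration of the technique, not the implementation used in the study:

```python
import numpy as np

def adaptive_importance_sampling(g, dim, n_per_iter=200, n_iter=5, seed=0):
    """Toy adaptive importance sampling in standard normal space.
    g(u) maps an (n, dim) array to n limit state values; g <= 0 is failure."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)   # start from the nominal density

    def log_weight(u):
        # log of (nominal density / sampling density); constants cancel
        return (-0.5 * np.sum(u**2, axis=1)
                + 0.5 * np.sum(((u - mean) / std) ** 2, axis=1)
                + np.sum(np.log(std)))

    for _ in range(n_iter):
        u = mean + std * rng.standard_normal((n_per_iter, dim))
        fail = g(u) <= 0.0
        if not fail.any():
            std = std * 1.5                   # widen the search, no failures yet
            continue
        w = np.exp(log_weight(u[fail]))
        w = w / w.sum()
        new_mean = w @ u[fail]                # re-center on the failure region
        std = np.sqrt(w @ (u[fail] - new_mean) ** 2 + 1e-12)
        mean = new_mean

    # final estimate with the adapted density
    u = mean + std * rng.standard_normal((10 * n_per_iter, dim))
    return np.mean((g(u) <= 0.0) * np.exp(log_weight(u)))
```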
This method becomes inefficient with an increasing number of random variables, due to less accurate estimates of the density statistics. Therefore, it is recommended to apply this method to problems with up to twenty random variables.
Furthermore, it can analyze only one dominant failure region. In our studies, where discrete distribution types were used together with continuous random variables, we observed that additional numerical effort was needed to obtain a similar accuracy of the failure probability estimates as in purely continuous problems.
This is caused by artificial discontinuities of the limit state function in the standard normal space, as shown in Figure 3. Even for continuous limit state functions, such discontinuities occur due to the discrete distributions. This phenomenon causes multiple most probable failure points, which makes the normal sampling density less efficient.
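A minimal illustration of this effect: mapping a discrete variable (here a hypothetical lane count with illustrative probabilities) into standard normal space via the inverse CDF makes any quantity depending on it piecewise constant, and hence discontinuous, in the Gaussian space:

```python
import numpy as np
from scipy.stats import norm

def lanes_from_u(u):
    """Inverse-CDF mapping of a standard normal variable to a hypothetical
    discrete lane count (2, 3 or 4 lanes with illustrative probabilities
    0.3 / 0.5 / 0.2). The result is piecewise constant in u, so any limit
    state depending on it jumps at norm.ppf(0.3) and norm.ppf(0.8)."""
    p = norm.cdf(u)
    return np.where(p < 0.3, 2, np.where(p < 0.8, 3, 4))

print(lanes_from_u(np.linspace(-3.0, 3.0, 13)))   # steps, not a smooth curve
```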
In order to overcome the limitation to one dominant failure region, we extended the Importance Sampling using Design Points (ISPUD) approach by a multi-modal density according to [Geyer]. The modified sampling density may consist of an arbitrary number of individual sampling densities with different center points and unit covariance in the Gaussian space. In Figure 4 the sampling is shown for four individual failure regions.
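A sketch of this multi-modal sampling density and the corresponding importance sampling estimate, assuming the most probable failure points are already known and weighting all mixture components equally, might look as follows:

```python
import numpy as np
from scipy.special import logsumexp

def multimodal_ispud_estimate(g, centers, n_samples=2000, seed=0):
    """Toy multi-modal importance sampling estimate of the failure
    probability in standard normal space. centers has shape (k, dim) and
    holds the detected most probable failure points; the sampling density
    is an equally weighted mixture of unit-covariance Gaussians around
    them. g(u) <= 0 marks failure."""
    rng = np.random.default_rng(seed)
    k, dim = centers.shape
    comp = rng.integers(k, size=n_samples)            # pick a mixture component
    u = centers[comp] + rng.standard_normal((n_samples, dim))

    # log densities up to the common (2*pi)^(-dim/2) factor, which cancels
    log_phi = -0.5 * np.sum(u**2, axis=1)             # nominal standard normal
    d2 = np.sum((u[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    log_q = logsumexp(-0.5 * d2, axis=1) - np.log(k)  # mixture density

    w = np.exp(log_phi - log_q)                       # importance weights
    return np.mean((g(u) <= 0.0) * w)
```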
After the most important failure regions have been detected, the corresponding most probable failure points are used as centers of the sampling densities in the multi-modal ISPUD approach. Since the failure probability is not estimated by the beta-distance analogy of FORM but by the more accurate importance sampling, even non-linear limit state functions can be evaluated accurately. Furthermore, the local optimizer need not be very accurate in its estimate of the local most probable failure points.
In this example we investigate the jam-end scenario, where an ego vehicle following a lead vehicle drives toward the end of a traffic jam on a highway. At a certain time, the lead vehicle changes lane and the ego vehicle has to detect the last vehicle of the jam in order to perform accident-free braking.
In the simulation software the Time-To-Collision (TTC) is estimated with respect to the given input parameters. We consider this TTC as the limit state quantity and investigate several failure limits with the reliability algorithms.
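In the notation introduced above, the limit state function for a given TTC failure limit reads (our formalization, consistent with the study's setup; failure means the simulated TTC falls below the investigated limit):

```latex
g(\mathbf{x}) = \mathrm{TTC}(\mathbf{x}) - \mathrm{TTC}_{\mathrm{lim}},
\qquad P_f = P\left[ g(\mathbf{X}) \le 0 \right]
```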
As input scatter we assume nine continuously scattering parameters, such as the lead vehicle and jam-end speeds, the pull-out time, the lead vehicle braking deceleration, and the lane offsets of the traffic jam and the lead vehicle. The number of road lanes, the lead vehicle class and the pull-out direction have been modeled with discrete random distributions.
In order to perform the analysis and verification more efficiently, in a first step a global meta-model was created based on 1000 samples. In order to obtain more samples, and thus higher accuracy, in the relevant regions, a local adaptation strategy was used (Adaptive Metamodel of Optimal Prognosis, [Ansys Dynardo, Most]). Based on this fast meta-model we investigated the multi-modal and the Adaptive Importance Sampling in comparison to brute-force Monte Carlo Simulation.
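As a rough stand-in for this workflow (the actual Metamodel of Optimal Prognosis selects and adapts its approximation model automatically; here a generic Gaussian process takes its place, and X_doe / ttc_doe are assumed to hold the initial design of experiments), a surrogate-based limit state can be sketched as:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fit_surrogate(X_doe, ttc_doe):
    """Fit a fast approximation of TTC(x) on the initial design of
    experiments (here: 1000 x 12 inputs), so the reliability algorithms can
    query the surrogate instead of the expensive SiL solver."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_doe, ttc_doe)
    return gp

def g_surrogate(gp, X, ttc_limit=0.5):
    """Limit state on the surrogate: negative values mean predicted failure
    (TTC below the investigated limit)."""
    return gp.predict(X) - ttc_limit
```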
In Figure 5 one subspace of the 12-dimensional meta-model is shown. As indicated in the figure, the lead vehicle speed and the jam-end speed are the most important parameters in this scenario. Furthermore, the relation between the Time-To-Collision and the input parameters is almost monotonic. Thus, we would expect different failure regions to arise mainly from different combinations of the discrete parameters.
In Figure 6 the convergence of the multiple FORM search is shown for one specific failure limit. It can be seen that the optimizer converged to different reliability index values, which correspond to different most probable failure points. Altogether, 20 failure points were detected, which are used as sampling centers for the importance sampling.
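A multiple design point search of this kind can be sketched as a norm minimization in standard normal space with random restarts; the restart count, the deduplication distance and the optimizer choice below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def find_design_points(g, dim, n_restarts=50, min_dist=0.5, seed=0):
    """Toy multiple design point search: minimize ||u||^2 subject to
    g(u) <= 0 from many random starts and keep the distinct local optima.
    Each kept point is a most probable failure point; its norm is the
    corresponding reliability index beta. g evaluates a single point here."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n_restarts):
        u0 = rng.standard_normal(dim)
        res = minimize(lambda u: u @ u, u0, method="SLSQP",
                       constraints={"type": "ineq",
                                    "fun": lambda u: -g(u)})   # g(u) <= 0
        if res.success and all(np.linalg.norm(res.x - p) > min_dist
                               for p in points):
            points.append(res.x)
    return np.array(points)
```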
As indicated, the multi-modal ISPUD is the most efficient algorithm, especially for small failure probabilities, which is the expected field of application. In Figure 7 the importance sampling density is shown for the three most important parameters in the original parameter space. Next, the multi-modal and the adaptive Importance Sampling are applied using the traffic simulation software directly. The Monte Carlo Simulation could not be applied due to its large numerical effort. In Table 2 the results are compared. Again, the results of both methods agree very well, while the ISPUD approach needs fewer samples.
Since the FORM method is applied on the meta-model only, altogether 1000 samples for the meta-model plus 5000 samples are needed. However, the estimates with the real solver indicate a much larger failure probability than estimated using the meta-model.
We therefore always apply the ISPUD approach using the direct solver. Even if the most probable failure points are not estimated very accurately, we still obtain valid results, since the ISPUD algorithm continues sampling until a certain accuracy of the estimated failure probability is reached.
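A sketch of such a stopping rule, assuming the hypothetical draw_batch() returns the per-sample terms I[g <= 0] * w of the importance sampling estimator and using the coefficient of variation of the running estimate as the accuracy measure:

```python
import numpy as np

def sample_until_accurate(draw_batch, target_cov=0.10, max_batches=100):
    """Keep drawing importance sampling batches until the coefficient of
    variation of the failure probability estimate drops below the target
    (10% in the study)."""
    terms = np.empty(0)
    pf, cov = 0.0, np.inf
    for _ in range(max_batches):
        terms = np.concatenate([terms, draw_batch()])
        pf = terms.mean()
        if pf > 0.0:
            cov = terms.std(ddof=1) / (np.sqrt(len(terms)) * pf)
            if cov < target_cov:
                break
    return pf, cov
```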
Finally, we investigate the influence of the accuracy of the obtained most probable failure points. For this purpose we use the meta-model again, considering a failure limit of 0.5 s for the Time-To-Collision. We deliberately introduce wrong failure points by modifying the limit state in the FORM search while keeping the original one in the ISPUD sampling. The results are illustrated in Figure 8. It can be seen that if the density center points are shifted inside the failure regions, the number of unsafe samples increases, which increases the accuracy of the estimated failure probability.
Therefore, fewer samples are necessary to obtain the required accuracy of 10%. In the other case, when the estimated failure points, and thus the center points of the importance sampling densities, are located too far inside the safe region, the number of samples in the unsafe region decreases and thus the total number of required samples in ISPUD increases. Nevertheless, in all three cases the estimate of the failure probability was quite accurate.
In this paper we have presented an automatic approach for the reliability evaluation of specific traffic scenarios for the validation of Advanced Driver Assistance Systems. In this analysis the control device is represented as a simulation model using software-in-the-loop technology.
Specific inputs of this simulated controller are modeled as random inputs in a stochastic analysis. Based on the definition of a failure criterion, well-known reliability algorithms could be applied. In our study we used classical Monte Carlo Simulation only for verification, due to the enormous numerical effort it requires to prove small event probabilities. In order to reduce the number of necessary simulation runs, variance-reducing importance sampling was applied. For this purpose, we used a multiple design point search approach to detect the important failure regions.
Based on a confidence-based error estimate we could ensure that the sampling loop was continued until the required accuracy of the probability estimate was obtained. The presented approach enables the automatic reliability proof of an Advanced Driver Assistance System for a specific scenario with minimal manual input. However, one very important point is the quantification of the input uncertainties of the investigated scenario. These assumptions strongly influence the finally estimated failure rate; therefore, attention should be paid to deriving suitable assumptions about distribution types, scatter and event correlations from real-world observations.