Researchers Use Graphical System Design for Development and Control of Unmanned Underwater Vehicles
Researchers at the University of Limerick use NI software and hardware to design and control a virtual underwater lab.
"The VUL has open architecture, providing a framework for researchers to develop, implement, and test advanced control algorithms in a safe, simulated environment before actual tests are performed in a real-world environment."
The Challenge: Developing a platform for easy integration of survey equipment and advanced control development for unmanned underwater vehicles (UUVs).
The Solution: Using National Instruments CompactRIO, Compact Vision System, and LabVIEW to create a mixed hardware/software tool to serve as a generic solution for system integration, testing, and implementation of advanced control algorithms for UUVs through rapid control prototyping and hardware-in-the-loop techniques.
Edin Omerdic - Mobile & Marine Robotics Research Centre, the University of Limerick
At the Mobile & Marine Robotics Research Centre (MMRRC) at the University of Limerick, we created a generic solution for underwater development. As part of the current research project (HEA PRTLI III “Deep Ocean Habitat Mapping Using an ROV”), the MMRRC has acquired state-of-the-art survey equipment, including a RESON SeaBat 7125 high-resolution multibeam sonar; an IXSEA photonic inertial navigation system (PHINS); an RDI Doppler velocity log (DVL); a MicroBath depth sensor; six TriTech digital precision altimeters for obstacle avoidance; NI CompactRIO; and NI Compact Vision System.
During research cruises in February and June 2005 with the RV Celtic Explorer, we identified weak points in the overall system integration to address with our solution, a virtual underwater lab (VUL). The VUL is a mixed hardware/software tool designed to overcome the identified problems; simplify overall system integration; and provide a framework for researchers to develop, implement, and test advanced control algorithms in a combined real-world/simulated environment.
Because of the complexity of the underwater environment, the vehicle control system had to avoid obstacles and compensate for various external disturbances, such as sea currents and the drag effects of the umbilical on remotely operated vehicles (ROVs). The physical shape and actuator configuration of the vehicle (number, position, and orientation of thrusters and control surfaces) imposed constraints that limited control actions. Without an effectively designed control system, tracking errors would lead to unnecessary survey mission time delays. In addition, nonoptimal control allocation of the actuators would waste available energy resources.
A typical seabed survey mission requires integration of the existing ship equipment (GPS, USBL) with ROV onboard components (multibeam, inertial navigation system, DVL, depth sensor, sound velocity probe, vision system). To obtain the highest-quality navigation data, it is necessary to reconfigure some components for the best performance in real time, depending on the stage of the mission. Integration of the overall system requires significant technical expertise. It is also very expensive to work with this equipment in real conditions because ship time is costly.
Modern ROVs conduct sampling, data acquisition, and high-resolution acoustic and video surveys in deep oceans. Expensive equipment is deployed very close to the seabed, and the ROV pilot has the huge responsibility of controlling the vehicle and preventing any damage or loss of equipment. Developing a set of aiding tools (control algorithms) to help an ROV pilot with moderate skills easily perform complex tasks is critical to potential commercial applications. The pilot must be able to bring the vehicle to the initial position, activate the corresponding aiding tool (algorithm), and monitor the realization of the task.
In an underwater environment, an ROV pilot’s field of view is limited to onboard cameras. The side view of the vehicle and its underwater environment enhances steering. We made this side view possible by preparing a virtual reality (VR) underwater scene with 3D models of the vehicle, ship, and seabed in advance and by forwarding real-time signals from sensors (position and orientation of the ROV and ship) to the VR scene in real time. To use the VUL in a simulated environment, we replaced the real-world components from the ship (GPS1, GAPS) and the ROV (PHINS, external sensors, power lines, and leak detectors) with hardware/software simulators.
We implemented all software (except the sonar simulator) in NI LabVIEW using the LabVIEW State Diagram Toolkit, the LabVIEW Control Design and Simulation Module, and the LabVIEW FPGA Module. We bundled data (outputs of individual components) into clusters and transmitted them using network-published shared variables based on the NI Publish-Subscribe Protocol (NI-PSP). For this type of data exchange, NI-PSP uses less network bandwidth and is more efficient than a raw TCP/IP implementation.
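Because LabVIEW block diagrams cannot be reproduced in text, a minimal Python sketch can illustrate the publish-subscribe idea behind the shared-variable approach: sensor outputs are bundled into a cluster (here a plain dict) and pushed to subscribers only when the value changes, which is one reason such a protocol consumes less bandwidth than naive continuous streaming. The class and its methods are illustrative assumptions, not the NI-PSP API.

```python
# Sketch of change-based publish-subscribe, analogous in spirit to a
# network-published shared variable (names here are hypothetical).
class SharedVariable:
    def __init__(self):
        self._value = None
        self._subscribers = []

    def subscribe(self, callback):
        """Register a consumer loop that receives cluster updates."""
        self._subscribers.append(callback)

    def publish(self, cluster):
        """Push a bundled data cluster, skipping redundant updates."""
        if cluster != self._value:
            self._value = cluster
            for cb in self._subscribers:
                cb(cluster)
```

A consumer simply subscribes a callback; publishing the same cluster twice delivers it only once.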
It is important to note that we synchronized all simulators with real time. We implemented full six degrees-of-freedom vessel dynamic models, including thruster DC-motor dynamics with nonlinearities, such as saturation, slew-rate limiter, friction, and nonlinear propeller load. We simulated different components of the ship and ROV simulators as parallel loops executed with different speeds, depending on the dynamics of components.
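The thruster nonlinearities named above can be sketched in a few lines of Python. This is a simplified illustration, not the model used in the VUL: the time constant, load coefficient, and first-order lag structure are assumptions chosen only to show how saturation, a slew-rate limiter, and a nonlinear propeller load combine in one simulation step.

```python
import numpy as np

def saturate(u, u_max):
    """Clamp the commanded thruster velocity to its saturation bound."""
    return np.clip(u, -u_max, u_max)

def slew_rate_limit(u_cmd, u_prev, rate_max, dt):
    """Limit how fast the command may change in one time step."""
    du = np.clip(u_cmd - u_prev, -rate_max * dt, rate_max * dt)
    return u_prev + du

def thruster_step(omega, u_cmd, dt, tau=0.2, k_load=0.05):
    """One Euler step of a simplified DC-motor thruster model:
    first-order lag toward the command plus a quadratic propeller load.
    tau and k_load are illustrative values, not measured parameters."""
    d_omega = (u_cmd - omega) / tau - k_load * omega * abs(omega)
    return omega + dt * d_omega
```

In a parallel simulator loop, each new command would first pass through `slew_rate_limit` and `saturate`, then drive `thruster_step` at that loop's rate.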
Task Executor Control
The control system used a hybrid control approach, which combined the advantages of a top-down, traditional (hierarchical), artificial intelligence approach and a bottom-up, behavior-based approach. We implemented two high-level task executors – waypoint tracking and obstacle avoidance. Each of these task executors competed to take control of actuators. We developed the control buffer concept to provide transparency and easy fusion of different task executor control demands. Each task executor had its own control cluster and mask inside the control buffer. The control cluster consisted of hand control unit (HCU) components (to simulate a virtual joystick) and settings for low-level controllers (setpoints and on/off switches to activate/deactivate individual controllers).
Each control cluster was masked with a corresponding mask, depending on the state of the mission and navigation data (interaction with the real world). We built a mask from weights and logic gates, which were bundled into the same structure as the control cluster. We performed masking by multiplying (ANDing) corresponding fields in the control cluster and mask. The mask content determined the priority level of the task executor. In this way, it was possible to control the degree of cooperation and competition between different task executors. After masking, control clusters were bundled into the Winner Control Cluster, which had exclusive actuator control.
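The masking step described above can be sketched in Python: weights multiply analog demands and logic gates AND the on/off switches, field by field. The fusion rule shown for building the Winner Control Cluster (largest-magnitude demand wins, switches are ORed) is an assumption for illustration; the article does not specify the exact fusion logic.

```python
def apply_mask(cluster, mask):
    """Mask a control cluster field by field: multiply analog demands
    by weights, AND boolean on/off switches with logic gates."""
    out = {}
    for key, val in cluster.items():
        m = mask[key]
        if isinstance(val, bool):
            out[key] = val and bool(m)   # logic gate
        else:
            out[key] = val * m           # weight
    return out

def fuse(masked_clusters):
    """Bundle masked clusters into a winner cluster. This fusion rule
    (largest magnitude wins, switches ORed) is a plausible sketch,
    not the published algorithm."""
    winner = {}
    for cluster in masked_clusters:
        for key, val in cluster.items():
            if isinstance(val, bool):
                winner[key] = winner.get(key, False) or val
            elif key not in winner or abs(val) > abs(winner[key]):
                winner[key] = val
    return winner
```

Setting a task executor's mask weights near 1 gives it high priority; intermediate weights let two executors cooperate rather than strictly compete.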
We implemented two planners in the VUL – the Mission Planner and Tracking Waypoints Planner. The main task of the Mission Planner, a state machine using the LabVIEW State Diagram Toolkit, was decomposition of the mission into a set of tasks. The control execution layer supervised the activity of the lower-level reactive layer and assessed the situation. Based on external conditions or in-state calculation, the Mission Planner decided which state needed to be executed next.
We performed actual waypoint guidance inside the Tracking Waypoints state. The main task of the Tracking Waypoints Planner, a state machine whose parent is the Tracking Waypoints Super-State, was the decomposition of the waypoint guidance algorithm into a set of tasks. Depending on the predetermined tracking mode (constant depth or constant altitude), the corresponding operation mode of the Auto-Heave low-level controller (depth or altitude) was permanently activated in the Tracking Control Cluster during the waypoint guidance. The Navigation PC ran a real-time, side-scan/multibeam sonar simulator, developed by a member of the research group.
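A state machine like the Mission Planner, built graphically with the LabVIEW State Diagram Toolkit, reduces in text form to a transition table keyed by (state, event). The states and events below are hypothetical stand-ins chosen to mirror the mission decomposition described above, not the actual VUL state names.

```python
def run_mission(events):
    """Step a minimal mission state machine through a sequence of
    external events, returning the visited states. State and event
    names are illustrative only."""
    state = "Init"
    transitions = {
        ("Init", "ready"): "TrackingWaypoints",
        ("TrackingWaypoints", "obstacle"): "ObstacleAvoidance",
        ("ObstacleAvoidance", "clear"): "TrackingWaypoints",
        ("TrackingWaypoints", "done"): "SurfaceRecovery",
    }
    trace = [state]
    for ev in events:
        # Unknown (state, event) pairs keep the current state,
        # mimicking in-state calculation until a condition fires.
        state = transitions.get((state, ev), state)
        trace.append(state)
    return trace
```

Nesting works the same way: the Tracking Waypoints Super-State would own its own inner transition table for the waypoint guidance sub-tasks.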
We developed a set of LabVIEW toolkits, including VIs for quaternions, kinematics transformations, geographic data transformations, and special-purpose signal conditioning. Using these toolkits, we also developed a library of Express VIs, including Control Allocation, Joystick Conditioning, PHINS Simulator and Conditioning, and Virtual Reality Interface.
We implemented a hybrid control allocation approach inside the control allocation Express VI. In the fault-free case, optimal control allocation is guaranteed for all possible command inputs. In faulty situations, the fault diagnosis part of the system immediately detects and isolates any fault in a thruster using fault-detection units and delivers fault information in the form of a fault indicator vector. The fault accommodation part of the system uses this vector to accommodate the fault and, if necessary, switch off the faulty thruster.
At the same time, we performed control reallocation by redistributing the control energy among the remaining operable thrusters. Slider HT and VT saturation bounds controlled the degree of usage for each thruster. Depending on the state of the thrusters (healthy, partial fault, or total fault), the ROV pilot or fault accommodation system determined the position of these sliders.
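One common way to realize this kind of control reallocation, shown here as a hedged sketch rather than the VUL's actual hybrid algorithm, is a weighted pseudo-inverse: the fault indicator vector becomes a per-thruster weight, and a zero weight forces the faulty thruster to carry no load while the remaining thrusters absorb the demand.

```python
import numpy as np

def allocate(tau_cmd, T, w):
    """Weighted pseudo-inverse control allocation (illustrative).
    tau_cmd: desired generalized forces (e.g., surge, sway, yaw)
    T: thruster configuration matrix mapping thruster forces to tau
    w: per-thruster weights derived from the fault indicator vector
       (1 = healthy, 0 = total fault, intermediate = partial fault).
    Returns thruster forces u with T @ u = tau_cmd where attainable."""
    W = np.diag(w)
    # Minimum-effort solution of T u = tau subject to the weights:
    # u = W T^T (T W T^T)^+ tau; pinv handles the singular fault cases.
    return W @ T.T @ np.linalg.pinv(T @ W @ T.T) @ tau_cmd
```

With two identical surge thrusters, a healthy system splits a surge demand evenly; zeroing one weight shifts the full demand onto the remaining thruster, mirroring the reallocation behavior described above.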
Visualization of the thruster velocity saturation bounds, implemented as part of the fault diagnosis and accommodation system (FDAS), provided insight into the constraints imposed by a particular thruster configuration. During missions, the ROV pilot/control law generated command inputs that stretched out over the command space in different directions. The thruster configuration determined the position of the saturation bounds inside the command space. To avoid thruster velocity saturation, command inputs had to remain within these bounds. Any fault in a thruster changed the shape of the attainable command set. Using different indicators and visualization tools, such as the Thruster Velocity Saturation Indicator and Virtual Control Space Visualization Tool, the FDAS informed the ROV pilot/main controller about the position of actual command inputs relative to the actual attainable command set. With this information, even an inexperienced ROV pilot could detect when thruster velocity saturation occurred and correct the command inputs so that they became attainable.
With the help of National Instruments hardware and software, the MMRRC is successfully developing the VUL generic hardware/software tool for integration of survey equipment with existing ROV and ship resources. We believe that our application shows the hidden potential of LabVIEW and NI hardware. The VUL has open architecture, providing a framework for researchers to develop, implement, and test advanced control algorithms in a safe, simulated environment before actual tests are performed in a real-world environment. The signal-level compatibility between the simulated and real-world environments provides the opportunity for engineers to use rapid control prototyping and hardware-in-the-loop development techniques in system design.