Applied Sciences, Vol. 13, Pages 439: Autonomous Manipulator of a Mobile Robot Based on a Vision System

1. Introduction

Recent technological advancements allow robotic systems to become increasingly autonomous. Much research in the field of autonomous robots now focuses on autonomous navigation in specific environments or autonomous positioning to perform specific tasks, for example, gripping an object. Improvements in this area are mainly due to continued innovation in computer vision. The most common solution is to use fiducial markers to estimate the camera pose within an environment in real time.

Augmented reality (AR) fiducial markers are widely used in robotics, as they are suitable for navigation and positioning purposes. The most common are ARTag [1], ARToolkit [2], and ArUco [3]. The major application of marker-based vision systems is to navigate mobile robots. Knowing the position of the markers on the map, a robot is able to locate itself in space. Another approach assumes that markers can indicate the destination the robot is required to reach.

The research presented in [4] involved using ARTag markers to navigate a mobile robot. They helped increase the accuracy of GPS data. Another study concerned with the navigation of mobile robots [5] aimed to develop an algorithm for fusing the wheel odometry and IMU data and a vision system capable of detecting ARTags, estimating the distances to them, and correcting the robot’s position determined from the IMU data. The use of marker-based vision systems for robot navigation is also discussed in [6,7,8].

Artificial markers can be employed for the navigation and positioning of unmanned aerial vehicles (UAVs). As indicated in [9], the localization of multirotor aerial vehicles used in indoor applications can be based on visual inertial odometry and a map of markers. In the study of the localization of micro air vehicles (MAVs) [10], the researchers propose fusing the data from sensors with those provided by the vision system using ArUco markers. AR markers were also used in research on the autonomous landing of unmanned aerial vehicles at the desired location, including a mobile platform [11,12,13].

Marker-based vision systems are also used to assist in manipulation and positioning. As suggested in [14], a marker tracking system can be integrated into microelectromechanical systems (MEMS) and similar devices to provide real-time data on the position of microscale objects. The research on multirobot industrial systems, e.g., [15], shows that ArUco markers can improve the interaction between a fixed (stationary) robot and a mobile robot, ensuring high-accuracy positioning of the manipulator and high-accuracy selection of details from the mobile platform. As indicated in [16], a manipulator equipped with a marker and camera-based system can accurately position the end-effector to grasp an object and move it to the desired position. The vision system proposed in [17] uses markers to control the position and orientation of excavator arms (booms). The paper [18] presents a system for estimating manipulator positions using ArUco tags.

The aim of the study described in this article was to develop an autonomous panel operation system for a mobile robot equipped with a manipulator. In this article, the term robot is used to refer to a platform to which the manipulator is attached. It was assumed that the task performed by the robot would be to operate a selected switch on a panel. The position of the manipulator is determined by detecting the positions of the markers located on the panel surface. By determining the locations of the switches on the panel in relation to the markers, the manipulator is able to run appropriate movement sequences to turn them on or off.

The two major subsystems responsible for the manipulator’s performance are the vision and control systems. The vision system first detects the markers and then calculates the position and orientation of the end-effector with respect to the panel. The control system uses the data to move the end-effector to a desired position in the coordinate system so that the task can be performed with the highest accuracy possible. Another system incorporated into the unmanned ground vehicle (UGV) is the autonomous manipulator control system. A vehicle with a total weight of 50 kg was designed, constructed, and programmed by the IMPULS student research group affiliated with the Department of Automation and Robotics of the Kielce University of Technology in Poland. The four-wheeled vehicle is equipped with a specifically developed six-degree-of-freedom manipulator and a wireless communication and control system. The vehicle navigation and manipulator operation are controlled by STM microprocessors programmed, for instance, to solve the inverse kinematics of the manipulator.

The IMPULS robot project involved (a) developing vision system algorithms to determine the position of elements on the operator panel in relation to the end-effector, (b) installing the vision system hardware, i.e., a camera and a minicomputer, (c) analyzing the manipulator positioning accuracy, (d) assessing the marker recognition accuracy, and (e) determining the accuracy of the autonomous positioning of the end-effector relative to the markers.

Section 2 of this article describes the design and control of the manipulator; it also includes the algorithm for the vision system. Section 3 provides the research results, which are discussed in Section 4. The last section outlines the key conclusions drawn, as well as proposals for the further development of the robotic system. The improved manipulator is expected to be able to work alongside humans as a collaborative robot, performing simple tasks such as the preparation of beverages in the catering services sector or performing pick-and-place operations along with object orientation detection in the manufacturing sector.

2. System Description

The autonomous system was used in a mobile robot equipped with a manipulator, shown in Figure 1.

2.1. Manipulator

The manipulator has six degrees of freedom. It is composed of six 5th-class rotary kinematic pairs. As drives, it uses DC motors with planetary gears with ratios from 1:30 to 1:70, depending on the axis. For each axis, an additional gear was used in the form of a harmonic drive with a gear ratio of 1:160. This allowed for high positioning accuracy and torque. Incremental encoders with a resolution of 64 pulses per revolution were used to measure the axis rotation. Taking into account the full gear ratio, the obtained values for the respective axes range from 307,200 to 716,800 pulses per revolution.
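As a quick check of these figures, the effective resolution per output revolution follows directly from the quoted ratios. The snippet below is only an illustration; the constant names are ours and are not taken from the robot firmware.

```cpp
// Effective encoder resolution at the joint output:
// pulses per output revolution = encoder pulses per motor revolution
//                                * planetary ratio * harmonic-drive ratio.
constexpr long encoderPpr    = 64;    // incremental encoder, pulses per motor revolution
constexpr long harmonicRatio = 160;   // 1:160 harmonic drive on every axis
constexpr long planetaryMin  = 30;    // 1:30 ... 1:70 planetary gear, depending on the axis
constexpr long planetaryMax  = 70;

constexpr long pulsesMin = encoderPpr * planetaryMin * harmonicRatio;  // 307,200
constexpr long pulsesMax = encoderPpr * planetaryMax * harmonicRatio;  // 716,800

static_assert(pulsesMin == 307200 && pulsesMax == 716800,
              "matches the values quoted in the text");
```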

A kinematic analysis was performed for the manipulator using the Denavit–Hartenberg notation [19]. The base coordinate system is located in the base of the robot. The coordinate system of each axis is defined relative to the coordinate system of the previous axis by a 4 × 4 matrix. The matrix M shows the general form of the transformation (1). The matrix R is a 3 × 3 rotation matrix, the vector t = [tx, ty, tz]T is a translation column vector, and 0T = [0, 0, 0]. The transformation matrix is created by combining homogeneous transformations: translations and rotations around the x, y, and z axes. Multiplying the transformation matrices provides the position and orientation of the last system with respect to the base system. The derivation of the manipulator kinematics equations is presented in Appendix A.
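The composition of link transformations described above can be sketched in a few lines of C++ (the implementation language used in this project). The following is an illustrative sketch using OpenCV's small-matrix types, not the robot's actual kinematics code:

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// One Denavit–Hartenberg link transformation,
// M_i = Rot_z(theta_i) * Trans_z(d_i) * Trans_x(a_i) * Rot_x(alpha_i),
// expanded into a single 4 x 4 homogeneous matrix.
cv::Matx44d dhTransform(double theta, double d, double a, double alpha) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    return cv::Matx44d(
        ct, -st * ca,  st * sa, a * ct,
        st,  ct * ca, -ct * sa, a * st,
         0,       sa,       ca,      d,
         0,        0,        0,      1);
}

// Multiplying the link matrices in order yields the pose of the end-effector
// frame with respect to the base frame, as in Equation (A9) of Appendix A.
cv::Matx44d forwardKinematics(const std::vector<cv::Vec4d>& dhRows) {
    cv::Matx44d M = cv::Matx44d::eye();
    for (const auto& row : dhRows)             // row = (theta, d, a, alpha)
        M = M * dhTransform(row[0], row[1], row[2], row[3]);
    return M;
}
```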

The control system consists of a motherboard equipped with an STM32F407VGT6 microcontroller and four dual servo drivers, each built around an STM32F407 microcontroller. Each servo driver is connected to a dual VNH5019 integrated circuit that controls the power for two DC motors, and to two incremental encoders. The servo drivers control the angular position with the use of PID regulators, and the motherboard carries out the manipulator kinematics algorithms. Three servo controllers operate the six main drives of the manipulator, while the fourth servo controller operates the gripper drives.
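For illustration, a discrete PID position regulator of the kind run on each servo driver can be sketched as follows. This is a generic sketch: the gains, limits, and update rate are placeholders, not the values used in the servo firmware.

```cpp
#include <algorithm>

// Generic discrete PID regulator for joint angular position (illustrative only).
struct PidRegulator {
    double kp, ki, kd;          // controller gains (placeholders)
    double outputLimit;         // saturation limit for the motor command
    double integral = 0.0;      // accumulated error
    double prevError = 0.0;     // error from the previous control cycle

    // setpoint and measurement in encoder pulses; dt in seconds
    double update(double setpoint, double measured, double dt) {
        const double error = setpoint - measured;
        integral += error * dt;
        const double derivative = (error - prevError) / dt;
        prevError = error;
        const double u = kp * error + ki * integral + kd * derivative;
        return std::clamp(u, -outputLimit, outputLimit);   // saturate the output
    }
};
```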

2.2. Vision System

The vision system uses the Logitech HD Pro Webcam C920 digital camera with FullHD resolution (1920 × 1080 pixels) and a field of view of 78 degrees. The algorithm was implemented on the NVIDIA Jetson TX2 microcomputer. The camera is mounted on the gripper. As a result, the position of the gripper is determined by determining the position of the camera in relation to the panel.

The task of the vision system is to determine the position of the end effector in relation to the panel. On this basis, the algorithm determines the value and direction of movement that the manipulator should perform in order to obtain the given position.

ARTag markers [1] were used to estimate the position of the camera in relation to the panel. The test panel with markers is shown in Figure 2.

2.3. Marker Detection Algorithm

The algorithm and methods presented in [3,14,20] were used to estimate the position. This algorithm allows us to identify markers of various types that have a frame and an internal code of a fixed size, e.g., ArUco or ARTag.

The image analysis for marker detection starts with converting the image to grayscale. Then, adaptive thresholding is performed in order to obtain the edges. Next, the algorithm searches for contours [21] and approximates them by means of a polygon [22]. The algorithm searches only for contours that meet the assumptions. The markers are square, so their image will be a convex quadrilateral. The remaining contours are discarded. The perspective view is removed from the image of the contours that remain after filtration so that the code can be read. A marker is considered correct if it has a black frame and a code compatible with the codes written in the program. To check this, thresholding [23] is carried out.

For each detected marker, the position and orientation of the camera relative to the system associated with the marker are computed. The method of solving the Perspective-n-Point problem using the Levenberg–Marquardt optimization [24] was used to estimate the position. Given the positions of at least four points in the global coordinate system in 3D space and the projection coordinates of these points, it is possible to determine the orientation and position of the camera in relation to the global system. For exactly four points, the solution is unambiguous; for a greater number of points, the result is optimized with the chosen method.
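For reference, this pose estimation step can be reproduced with OpenCV's solvePnP, whose default iterative solver refines the pose with Levenberg–Marquardt optimization. The sketch below is illustrative only: the 50 mm marker side follows the dimensions quoted in the Discussion, and the corner ordering and function names are our assumptions.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Estimate the camera pose relative to the marker frame from one detected marker.
// Illustrative sketch: a 50 mm marker centered at the frame origin is assumed.
bool estimatePose(const std::vector<cv::Point2f>& imageCorners,   // detected 2D corners
                  const cv::Mat& cameraMatrix,                     // from calibration
                  const cv::Mat& distCoeffs,
                  cv::Vec3d& rvec, cv::Vec3d& tvec) {
    const float s = 50.0f / 2.0f;                                  // half of the marker side, mm
    const std::vector<cv::Point3f> objectCorners = {               // 3D corners in the marker frame
        {-s,  s, 0.0f}, { s,  s, 0.0f}, { s, -s, 0.0f}, {-s, -s, 0.0f}
    };
    // SOLVEPNP_ITERATIVE refines the pose with Levenberg–Marquardt optimization.
    return cv::solvePnP(objectCorners, imageCorners, cameraMatrix, distCoeffs,
                        rvec, tvec, false, cv::SOLVEPNP_ITERATIVE);
}
```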

The algorithm was implemented in C++ using the OpenCV and QT libraries on the NVIDIA microcomputer.

Figure 3 shows the image from the camera while positioning the manipulator.

In order to identify the internal parameters of the camera, a calibration process was performed for a fixed sharpness value. The camera was calibrated using the Zhang method [25] in Matlab.

2.4. Positioning Based on a Vision System

The system was developed with the assumption that the positions of the appropriate markers on the panel and the location and type of objects are known. On the panel shown in Figure 3, there are two different markers, and the center of the left marker defines the origin of the global coordinate system. In this case, the maximum number of points available for position estimation is eight, which increases the accuracy. The operation of the system begins with searching for markers and determining the position of the gripper relative to the coordinate system on the panel, as shown in Figure 4. Once the cycle starts, the operator does not interfere with the robot’s work. The starting position of the manipulator is arbitrary, provided that at least one marker is visible in the frame. After determining the position, we calculated how much the robot must move along each axis of the tool coordinate system and how much it should rotate in relation to these axes in order to position itself at a base point in front of the panel (a sketch of this correction computation is given at the end of this subsection). When the intended position is reached, the markers are detected again and the position relative to which the next movements are determined is calculated. Before the robot completes the task, it successively arrives at several predetermined points where the position is recalculated. This increases the accuracy, because the farther the camera is from the markers, the less accurately the position is computed. At intermediate points, previous errors are eliminated. The last measurement is performed just in front of the object, so that the marker is still visible. Marker detection takes place only at fixed points while the robot is stationary. Thanks to this, the image is of good quality, and the movements between the points do not have to be speed-limited. The marker detection procedure is shown in Algorithm 1. It consists of several steps, starting from image conversion (color to greyscale), passing through marker contour detection (only quadrilateral shapes are allowed), and ending with checking the number encoded in the marker against the database stored in the manipulator memory. To determine the manipulator’s distance from the marker, the algorithm exploits the fact that the marker frame has known dimensions and shape. The 2D image points and their 3D correspondences are taken only from the outer edges of the marker frame. The thickness of the marker frame and its code are also checked. The positions of the markers located on the panel are known relative to the global coordinate system, so all detected marker points can be used in the PnP algorithm.

Algorithm 1. Marker detection algorithm.
Input: images with possible markers
Output: markers with code (number) and manipulator positions

1. Convert the image to greyscale
2. Detect edges
3. Detect contours; if a contour is too small, then remove it
4. Approximate the contours with polygons
5. Filter the contours; if a contour does not have 4 sides of equal length, then remove it
6. Remove the perspective projections (only windows with detected marker candidates)
7. Save all image marker candidates
8. If a marker is detected, then read the marker code; else go to line 10
9. If the code is in the robot database, then save it and calculate the manipulator position; else go to line 10
10. If not all candidates have been processed, then go to line 8
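The candidate-detection part of Algorithm 1 (steps 1–6) maps onto standard OpenCV calls. The sketch below is our illustration, not the code running on the robot; the threshold block size, minimum contour area, and rectified window size are assumed values.

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Find quadrilateral marker candidates and rectify them for code reading.
// Simplified sketch of Algorithm 1, steps 1-6; parameter values are assumptions.
std::vector<cv::Mat> findMarkerCandidates(const cv::Mat& frame,
                                          std::vector<std::vector<cv::Point2f>>& quads) {
    cv::Mat grey, bin;
    cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);                        // step 1
    cv::adaptiveThreshold(grey, bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY_INV, 15, 7);                  // step 2

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE); // step 3

    std::vector<cv::Mat> candidates;
    for (const auto& c : contours) {
        if (cv::contourArea(c) < 400.0) continue;                         // discard small contours
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 0.03 * cv::arcLength(c, true), true);   // step 4
        if (poly.size() != 4 || !cv::isContourConvex(poly)) continue;     // step 5: keep convex quads

        std::vector<cv::Point2f> corners(poly.begin(), poly.end());
        const int N = 64;                                                  // rectified window size
        std::vector<cv::Point2f> dst = {{0, 0}, {N - 1.f, 0}, {N - 1.f, N - 1.f}, {0, N - 1.f}};
        cv::Mat H = cv::getPerspectiveTransform(corners, dst);
        cv::Mat warped;
        cv::warpPerspective(grey, warped, H, cv::Size(N, N));              // step 6
        candidates.push_back(warped);
        quads.push_back(corners);
    }
    return candidates;   // steps 7-10: decode each candidate and match against the database
}
```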

If no markers are detected at an intermediate point, the robot will move relative to the previously calculated position.
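To illustrate how a single pose measurement is converted into a tool-frame correction, as mentioned earlier in this subsection, the camera position in the panel frame can be recovered from the solvePnP output and compared with the desired standoff point. The following sketch reflects our assumptions: the 170 mm standoff distance is taken from Section 3.3, the variable names are ours, and the rotational part of the correction is omitted for brevity.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Given the solvePnP result (rvec, tvec map panel coordinates into the camera frame),
// compute the translation the tool should perform to reach a base point in front of
// the panel. Illustrative sketch; the 170 mm standoff follows Section 3.3.
cv::Vec3d toolCorrection(const cv::Vec3d& rvec, const cv::Vec3d& tvec) {
    cv::Matx33d R;
    cv::Rodrigues(rvec, R);                           // rotation: panel frame -> camera frame
    cv::Vec3d camInPanel = -1.0 * (R.t() * tvec);     // current camera position in the panel frame
    const cv::Vec3d basePoint(0.0, 0.0, 170.0);       // desired standoff point in front of the panel, mm
    cv::Vec3d moveInPanel = basePoint - camInPanel;   // required motion expressed in panel axes
    return R * moveInPanel;                           // the same motion expressed in the tool (camera) axes
}
```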

3. Results

To check and evaluate the operation of the system, three tests were carried out. The first examined the execution accuracy of the manipulator, the second explored the accuracy of the vision system in estimating the position of the camera relative to the marker, and the third assessed the manipulator positioning based on the operation of the vision system.

3.1. Manipulator Accuracy Testing

The test consisted of setting the distance by which the manipulator should move along the selected axis and measuring how much it actually shifted. Measurements were made for selected axes. The operation was checked for various displacement values, including 20, 50, and 100 mm. In total, more than 60 measurements were made. Table 1 shows the measurements for displacements along the z-axis. The basic statistical parameters were calculated on the basis of the measurements. The actual manipulator movement was obtained by commanding a linear movement through the manipulator control system and then measuring where the end effector moved. The position value was read from a precision plunger dial indicator from RS Pro.

3.2. Vision System Accuracy Testing

This section presents the measurement of the camera position relative to the marker. The measurements were performed as a function of the distance from the marker and the tilt angle in relation to the selected axis, because such changes have a significant impact on the accuracy of the vision system. In this test, the camera was mounted on an ABB IRB 1600 industrial robot because of its high accuracy (according to the specification, the robot pose accuracy is 0.04 mm) [26]. Its position in relation to the marker system was taken as the reference position. Table 2 and Table 3 show the obtained results, and Figure 5 shows the dependence of the error on the distance to the marker. The tests were carried out under good lighting conditions: artificial laboratory lighting and a 250 W lamp directly illuminating the panel were used.

3.3. Testing the Positioning System on the Basis of the Vision System

The last tests concern combining the operation of both systems. A positioning test was carried out for the assumed base position; the manipulator should reach the same position every time. The vision system has a greater impact on errors, and its accuracy varies with the distance and the angle of inclination to the panel. Table 4 shows the positions relative to the manipulator’s base coordinate system reached by the end effector and the actual distance from the panel. During these tests, the flexible suspension of the robot was stiffened so that the movement of the entire robot did not affect the position of the manipulator effector. The set base position was 170 mm from the panel. This test allowed us to evaluate the positioning accuracy based on the vision system. The manipulator reached positions with an error of several millimeters, which is due to the different initial positions from which the vision system made the measurement. As shown by the previous tests, the greater the measurement distance and the greater the angle of the camera relative to the marker, the less accurate the measurements. The actual distances from the panel were measured using the di-soric LAT 51 M 500 IG3-B5 laser sensor.

Another test was performed to determine whether the task, i.e., operating the switch, was completed successfully. In this case, the position was measured after pressing the switch. As before, the reference position was read relative to the manipulator’s base coordinate system.

During the tests, the markers were detected at each point in the sequence. The operation of the system was also checked when it was impossible to calculate the correction. The results are given in Table 5 and Table 6.

4. Discussion

Based on the research on the positioning of the manipulator, it was determined that in more than 90% of cases, the error is less than 1 mm. The standard deviation was 0.617 mm. Half of the errors obtained were below the value of 0.4 mm, and 75% were below the value of 0.6 mm. This means that the positioning of the manipulator works with good accuracy, sufficient for robot applications. The maximum error was 3 mm, and the minimum error was 0 mm.

The vision system tests have shown that the distance measurement is relatively accurate, and the measurement error is, in most cases, less than 1 mm. The relative error is small, in most cases less than 0.5%; it becomes noticeably larger for distances above 340 mm, where it is in the range of 0.7–1.5%. Measurements of rotation around the axes of the coordinate system are burdened with greater error, which can even exceed 3°, and this has a significant influence on the operation of the entire manipulator control system. The tests were performed at different distances from the marker and at different angles. For long distances above 340 mm, the error in measuring the distance could be much greater and could even reach a few millimeters. The vision system settings allow the detection of a marker with a side of 50 mm from a distance in the range of approximately 200–420 mm. The error in measuring the distance grows as the angle of inclination relative to the marker increases. In Table 2, the first half of the measurements were made from a position directly opposite the panel; in this group, the maximum error was 0.97 mm and the minimum error was 0.13 mm. The second group consisted of measurements for an increasing angle. The unfavorable position resulted in a large error of 5.9 mm. For the performed measurements, the standard deviation was 1.69 mm. Half of the measurements had an error below 0.61 mm, and 75% of the measurements were below 0.94 mm.

In other work, authors obtained similar results of position measurement using a vision system; however, each study was carried out on a different scale. For example, in the case of navigation, longer distances (several meters) were measured against larger markers (15–20 cm). An example can be seen in a previous study [20], where the authors obtained a relative error usually below 1%, and for long distances, below 2%. In another study [6], tests were carried out for a distance of up to 150 cm and a marker with a size of 100 mm. In a range of up to 1 m, the authors obtained an error close to zero for small distances, which increased to several millimeters with increasing distance. Another study [14] examined the problem on the micro scale, and the obtained relative error did not exceed 1%. The most similar study was performed in the paper [18], in which the estimation of the manipulator position was studied. The authors obtained average absolute errors for position measurements using a vision system of 1.8 mm, 0.2 mm, and 3.5 mm for the x, y, and z axes, and average relative errors of 1.6%, 0.2%, and 0.2%, respectively. The measurements were performed for a distance of 100–120 cm and a board with ArUco markers (the size of an A4 sheet of paper).

Based on the research on the operation of the combined system, it was observed that the positioning error was greater than would appear from the separate studies of the subsystems, which had much lower errors. This error is, on average, a few millimeters. It was influenced by the error in the rotation measurement, which was up to several degrees. The system was tested by starting the measurement at various distances ranging from 300 to 400 mm and at different angles to the panel, from a straight view to approximately 45°. For larger camera angles and longer distances, the positioning error was greater.
However, the operation of the system ended in success each time, i.e., with a correctly switched button, because after reaching the base position in front of the panel, the position was calculated from scratch and corrections were introduced. The results of the successful tests are shown in Table 5, and Table 6 shows the results of the failed tests. During the second test, the markers were covered after reaching the base position, and the calculation of corrections was impossible. This resulted in a large error, and the task ended unsuccessfully.

5. Conclusions

The proposed manipulator positioning system based on the vision system underwent rigorous testing. It is noteworthy that the system was successfully used to perform all the tasks involving autonomous panel operation required for the European Rover Challenge 2021 and 2022 competitions, which simulate some aspects of missions to Mars.

Equipped with this autonomous positioning system, the manipulator is able to perform such tasks as pressing and releasing switches and buttons or turning knobs on the panel and picking up and placing objects. Because of the insufficient accuracy, the system is not suitable for high-precision operations, e.g., inserting a plug into a port or gripping very small objects.

Incorporating this autonomous system into an unmanned ground vehicle requires that corrections to the position of the manipulator be made in real time. The shock-absorbing suspension and other elements of the vehicle structure allow the platform to tilt as the center of gravity shifts with the moving robotic arm, which changes the position of the manipulator. Failure to introduce corrections in real time may result in failure to perform the required task. The tests that involved locking the suspension (tilt) showed that the manipulator performed the task correctly. Tests performed with the non-stiffened suspension also gave correct results.

The resulting vision system error is very small when the manipulator is close to the panel. At greater distances (above 340 mm), the larger error is not a problem, because after positioning the manipulator in front of the panel, corrections to the position are calculated in two places: at the base position and just in front of the switch. From close proximity to the marker, the system calculates the position with high accuracy, which allows for accurate arrival and execution of the task despite significant initial errors. In addition, the calculation of the correction is necessary due to the movable suspension of the entire robot. The results presented in the paper were obtained under laboratory conditions, with powerful artificial lighting (a 250 W lamp). However, the system also worked outdoors and indoors under daylight conditions and performed its function correctly.

Further research on the system will focus on increasing the autonomy, which will involve recognizing and localizing markers on the panel by using artificial neural network models to handle unknown panels.

Author Contributions

Conceptualization, A.A.-M., P.A.L., J.Z. and D.S.P.; methodology and software A.A.-M. and J.Z.; validation, K.B. (Krzysztof Borkowski), D.W., S.K., G.B. and K.B. (Kamil Borycki); formal analysis, investigation, and resources: A.A.-M., K.B. (Krzysztof Borkowski), D.S.P., K.B. (Kamil Borycki), G.B. and S.K.; writing—original draft preparation, P.A.L.; writing—review and editing, A.A.-M., P.A.L. and D.S.P.; visualization, A.A.-M., K.B. (Krzysztof Borkowski), K.B. (Kamil Borycki), S.K. and D.W. All authors have read and agreed to the published version of the manuscript.

Funding

Project co-financed by the Ministry of Education and Science, No. SKN/SP/534769/2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

This appendix provides details on the solution of the manipulator kinematic analysis.

The transformation matrix takes the general form:

$$M=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix}$$

(A1)

It can be decomposed into homogeneous transformations of translation and rotation around the respective axes:

$$\mathrm{Trans}_{x,y,z}=\begin{bmatrix}1&0&0&x\\0&1&0&y\\0&0&1&z\\0&0&0&1\end{bmatrix};\quad \mathrm{Rot}_{x,\alpha}=\begin{bmatrix}1&0&0&0\\0&\cos\alpha&-\sin\alpha&0\\0&\sin\alpha&\cos\alpha&0\\0&0&0&1\end{bmatrix}$$

(A2)

$$\mathrm{Rot}_{y,\beta}=\begin{bmatrix}\cos\beta&0&\sin\beta&0\\0&1&0&0\\-\sin\beta&0&\cos\beta&0\\0&0&0&1\end{bmatrix};\quad \mathrm{Rot}_{z,\gamma}=\begin{bmatrix}\cos\gamma&-\sin\gamma&0&0\\\sin\gamma&\cos\gamma&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}$$

(A3)

The manipulator diagram, with the Denavit–Hartenberg parameters marked, is shown in Figure A1.

Figure A1. Kinematic diagram of the manipulator [27].

Table A1 shows the values of the D-H parameters; the variables θi, where i = 1, 2, …, 6, are the configuration coordinates.

Table A1. Denavit–Hartenberg parameters.


i | θi [rad] | di [mm] | ai [mm] | αi [rad]
1 | θ1 | 73 | 0 | −π/2
2 | θ2 | −98 | 400 | 0
3 | θ3 − π/2 | 0 | 0 | −π/2
4 | θ4 | 500.9 | 0 | π/2
5 | θ5 | 0 | 0 | −π/2
6 | θ6 | 190.9 | 0 | 0

The transformation matrices between successive coordinate systems of the manipulator parts are presented below.

$$M_1^0=\mathrm{Rot}_{z,\theta_1}\cdot \mathrm{Trans}_{z,d_1}\cdot \mathrm{Trans}_{x,a_1}\cdot \mathrm{Rot}_{x,-\pi/2}=\begin{bmatrix}\cos\theta_1&0&-\sin\theta_1&a_1\cos\theta_1\\\sin\theta_1&0&\cos\theta_1&a_1\sin\theta_1\\0&-1&0&d_1\\0&0&0&1\end{bmatrix}$$

(A4)

$$M_3^2=\mathrm{Rot}_{z,\theta_3}\cdot \mathrm{Trans}_{z,d_3}\cdot \mathrm{Trans}_{x,a_3}\cdot \mathrm{Rot}_{x,-\pi/2}=\begin{bmatrix}\sin\theta_3&0&\cos\theta_3&a_3\sin\theta_3\\-\cos\theta_3&0&\sin\theta_3&-a_3\cos\theta_3\\0&-1&0&d_3\\0&0&0&1\end{bmatrix}$$

(A5)

$$M_4^3=\mathrm{Rot}_{z,\theta_4}\cdot \mathrm{Trans}_{z,d_4}\cdot \mathrm{Trans}_{x,a_4}\cdot \mathrm{Rot}_{x,\pi/2}=\begin{bmatrix}\cos\theta_4&0&\sin\theta_4&a_4\cos\theta_4\\\sin\theta_4&0&-\cos\theta_4&a_4\sin\theta_4\\0&1&0&d_4\\0&0&0&1\end{bmatrix}$$

(A6)

$$M_5^4=\mathrm{Rot}_{z,\theta_5}\cdot \mathrm{Trans}_{z,d_5}\cdot \mathrm{Trans}_{x,a_5}\cdot \mathrm{Rot}_{x,-\pi/2}=\begin{bmatrix}\cos\theta_5&0&-\sin\theta_5&a_5\cos\theta_5\\\sin\theta_5&0&\cos\theta_5&a_5\sin\theta_5\\0&-1&0&d_5\\0&0&0&1\end{bmatrix}$$

(A7)

$$M_6^5=\mathrm{Rot}_{z,\theta_6}\cdot \mathrm{Trans}_{z,d_6}\cdot \mathrm{Trans}_{x,a_6}=\begin{bmatrix}\cos\theta_6&-\sin\theta_6&0&a_6\cos\theta_6\\\sin\theta_6&\cos\theta_6&0&a_6\sin\theta_6\\0&0&1&d_6\\0&0&0&1\end{bmatrix}$$

(A8)

Multiplying these matrices, we obtain a transformation matrix that defines the position and orientation of the U6 gripper system in relation to the U0 base system.

$$M_6^0=M_1^0\cdot M_2^1\cdot M_3^2\cdot M_4^3\cdot M_5^4\cdot M_6^5=\begin{bmatrix}n_x&s_x&a_x&p_x\\n_y&s_y&a_y&p_y\\n_z&s_z&a_z&p_z\\0&0&0&1\end{bmatrix}$$

(A9)

Equation (A9) describes the kinematic model of the presented manipulator.

References

1. Fiala, M. ARTag, a fiducial marker system using digital techniques. In Proceedings of the CVPR, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 590–596.
2. Poupyrev, I.; Kato, H.; Billinghurst, M. ARToolkit User Manual; Version 2.33; Human Interface Technology Lab, University of Washington: Seattle, WA, USA, 2000.
3. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Marín-Jiménez, M.J. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292.
4. Strączyński, P.; Kazala, R. Increasing accuracy of the mobile robot positioning system by using ARTags. In Proceedings of the 24th International Conference Engineering Mechanics, Svratka, Czech Republic, 14–17 May 2018.
5. Zwierzchowski, J.; Pietrala, D.; Napieralski, J.; Napieralski, A. A Mobile Robot Position Adjustment as a Fusion of Vision System and Wheels Odometry in Autonomous Track Driving. Appl. Sci. 2021, 11, 4496.
6. Babinec, A.; Jurišica, L.; Hubinský, P.; Duchoň, F. Visual Localization of Mobile Robot Using Artificial Markers. Procedia Eng. 2014, 96, 1–9.
7. Zhong, X.; Zhou, Y.; Liu, H. Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417693489.
8. Ortiz-Fernandez, L.E.; Cabrera-Avila, E.V.; Silva, B.M.F.; Gonçalves, L.M.G. Smart Artificial Markers for Accurate Visual Mapping and Localization. Sensors 2021, 21, 625.
9. Bertoni, M.; Michieletto, S.; Oboe, R.; Michieletto, G. Indoor Visual-Based Localization System for Multi-Rotor UAVs. Sensors 2022, 22, 5798.
10. Xing, B.; Zhu, Q.; Pan, F.; Feng, X. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles. Sensors 2018, 18, 1706.
11. Lebedev, I.; Erashov, A.; Shabanova, A. Accurate Autonomous UAV Landing Using Vision-Based Detection of ArUco-Marker. In Lecture Notes in Computer Science, Proceedings of the Interactive Collaborative Robotics, ICR 2020, St. Petersburg, Russia, 7–9 October 2020; Ronzhin, A., Rigoll, G., Meshcheryakov, R., Eds.; Springer: Cham, Switzerland, 2020; pp. 179–188.
12. Wubben, J.; Fabra, F.; Calafate, C.T.; Krzeszowski, T.; Marquez-Barja, J.M.; Cano, J.-C.; Manzoni, P. Accurate Landing of Unmanned Aerial Vehicles Using Ground Pattern Recognition. Electronics 2019, 8, 1532.
13. Salagame, A.; Govindraj, S.; Omkar, S.N. Precision Landing of a UAV on a Moving Platform for Outdoor Applications. arXiv 2022, arXiv:2209.14436.
14. Kim, Y.; Yang, S.H.; Yang, K.W.; Dagalakis, N.G. Design of MEMS vision tracking system based on a micro fiducial marker. Sens. Actuators A Phys. 2015, 234, 48–56.
15. Yang, M.; Yu, L.; Wong, C.; Mineo, C.; Yang, E.; Bomphray, I.; Huang, R. A cooperative mobile robot and manipulator system (Co-MRMS) for transport and lay-up of fibre plies in modern composite material manufacture. Int. J. Adv. Manuf. Technol. 2022, 119, 1249–1265.
16. Zhang, Y.; Liu, Y.; Xie, Z.; Liu, Y.; Cao, B.; Liu, H. Visual Servo Control of the Macro/Micro Manipulator with Base Vibration Suppression and Backlash Compensation. Appl. Sci. 2022, 12, 8386.
