Self-attentive vision in evolutionary robotics
- Authors: Botha, Bouwer
- Date: 2024-04
- Subjects: Evolutionary robotics , Robotics , Neural networks (Computer science)
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/63628 , vital:73566
- Description: The autonomy of a robot refers to its ability to achieve a task in an environment with minimal human supervision. This may require autonomous solutions to perceive their environment in order to inform their decisions. An inexpensive and highly informative way for robots to perceive their environment is through vision. The autonomy of a robot is reliant on the quality of the robotic controller. These controllers are the software interface between the robot and the environment, determining the actions of the robot based on the perceived environment. Controllers are typically created using manual programming techniques, which become progressively more challenging as the complexity of both the robot and the task increases. An alternative to manual programming is the use of machine learning techniques such as those used in Evolutionary Robotics (ER). ER is an area of research that investigates the automatic creation of controllers. Instead of manually programming a controller, an Evolutionary Algorithm can be used to evolve the controller through repeated interactions with the task environment. Applying the ER approach to camera-based controllers, however, has presented problems for conventional ER methods. Firstly, existing architectures that are capable of automatically processing images have a large number of trained parameters. These architectures over-encumber the evolutionary process due to the large search space of possible configurations. Secondly, the evolution of complex controllers needs to be done in simulation, which requires either (a) the construction of a photo-realistic virtual environment with accurate lighting, texturing and models, or (b) a potential reduction of controller capability by simplifying the problem via image preprocessing. Any controller trained in simulation also raises the inherent concern that it may not transfer to the real world.
This study proposes a new technique for the evolution of camera-based controllers in ER that aims to address the highlighted problems. The use of self-attention is proposed to facilitate the evolution of compact controllers that develop specialized sets of task-relevant features in unprocessed images by focusing on important image regions. Furthermore, a new neural network-based simulation approach, Generative Neuro-Augmented Vision (GNAV), is proposed to simplify simulator construction. GNAV makes use of random data collected in a simple virtual environment and in the real world. A neural network is trained to overcome the visual discrepancies between these two environments. GNAV enables a controller to be trained in a simple simulated environment that appears similar to the real environment, while requiring minimal human supervision. The capabilities of the new technique were demonstrated using a series of real-world navigation tasks based on camera vision. Controllers utilizing the proposed self-attention mechanism were trained using GNAV and transferred to a real camera-equipped robot. The controllers were shown to be able to perform the same tasks in the real world. , Thesis (MSc) -- Faculty of Science, School of Computer Science, Mathematics, Physics and Statistics, 2024
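The patch-based idea in the abstract above can be sketched as follows: score fixed-size image patches against each other with a single self-attention pass and keep only the most-attended regions as controller input. All weights, patch sizes and the top-k choice below are illustrative placeholders, not values from the thesis.

```python
import numpy as np

def patch_self_attention(image, patch=8, d=16, top_k=4, seed=0):
    """Score image patches with one self-attention pass and return the
    indices of the most-attended (task-relevant) regions.
    Projection weights are random placeholders, not trained values."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # flatten non-overlapping patches into feature vectors: (n_patches, patch*patch)
    patches = (image.reshape(h // patch, patch, w // patch, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, patch * patch))
    n, f = patches.shape
    wq, wk = rng.standard_normal((f, d)), rng.standard_normal((f, d))
    q, k = patches @ wq, patches @ wk
    scores = q @ k.T / np.sqrt(d)                      # (n, n) attention logits
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    importance = attn.sum(axis=0)                      # total attention each patch receives
    return np.argsort(importance)[::-1][:top_k]        # most important patch indices

regions = patch_self_attention(np.random.default_rng(1).random((32, 32)))
```

A compact controller would then act only on the selected patches, which keeps the evolved parameter count small.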
- Full Text:
- Date Issued: 2024-04
Supporting competitive robot game mission planning using machine learning
- Authors: Strydom, Elton
- Date: 2024-04
- Subjects: Machine learning , High performance computing , Robotics , LEGO Mindstorms toys Computer programming
- Language: English
- Type: Master's theses , text
- Identifier: http://hdl.handle.net/10948/64841 , vital:73929
- Description: This dissertation presents a study aimed at supporting the strategic planning and execution of missions in competitive robot games, particularly in the FIRST LEGO® League (FLL), through the use of machine learning techniques. The primary objective is to formulate guidelines for evaluating mission strategies using machine learning techniques within the FLL landscape, thereby supporting participants in the mission strategy design journey within the FLL robot game. The research methodology encompasses a literature review focusing on current practices in the FLL mission strategy design process. This is followed by a literature review of machine learning techniques at a broad level, pivoting towards evolutionary algorithms. The study then delves into the specifics of genetic algorithms, exploring their suitability and potential advantages for mission strategy evaluation in competitive robotic environments within the FLL robot game. A significant portion of the research involves the development and testing of a prototype system that applies a genetic algorithm to simulate and evaluate different mission strategies, providing a practical tool for FLL teams. During the development of the evaluation prototype, guidelines were formulated in line with the primary research objective. Key findings of this study highlight the effectiveness of genetic algorithms in identifying optimal mission strategies. The prototype demonstrates the feasibility of using machine learning to provide real-time feedback to participating teams, enabling more informed decision-making in the formulation of mission strategies. , Thesis (MIT) -- Faculty of Engineering, the Built Environment, and Technology, School of Information Technology, 2024
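As a concrete illustration of the genetic-algorithm evaluation described above, the sketch below evolves an ordering of missions under a match-time budget. The mission pool, point values, durations and GA settings are all hypothetical, not data from the prototype.

```python
import random

# Hypothetical mission pool: (name, points, seconds to execute)
MISSIONS = [("M1", 20, 25), ("M2", 30, 40), ("M3", 10, 15),
            ("M4", 25, 30), ("M5", 40, 55), ("M6", 15, 20)]
MATCH_TIME = 150  # robot-game match length in seconds

def fitness(order):
    """Points scored by running missions in this order until one no longer fits."""
    t = pts = 0
    for i in order:
        _, p, dur = MISSIONS[i]
        if t + dur > MATCH_TIME:
            break
        t, pts = t + dur, pts + p
    return pts

def evolve(pop_size=30, gens=40, seed=42):
    rng = random.Random(seed)
    pop = [rng.sample(range(len(MISSIONS)), len(MISSIONS)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection: keep top half
        children = []
        for p in parents:
            child = p[:]
            i, j = rng.sample(range(len(child)), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([MISSIONS[i][0] for i in best], fitness(best))
```

Because fitness only scores the prefix of missions that fits in the match, the GA naturally learns both which missions to attempt and in what order.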
- Full Text:
- Date Issued: 2024-04
Static and bootstrapped neuro-simulation for complex robots in evolutionary robotics
- Authors: Woodford, Grant Warren
- Date: 2019
- Subjects: Robotics
- Language: English
- Type: Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10948/44656 , vital:38172
- Description: Evolutionary Robotics (ER) is a field of study focused on the automatic development of controllers and robot morphologies. Evolving controllers on real-world hardware is time-consuming and can damage hardware through wear. Robotic simulators can be used as an alternative to a real-world robot in order to speed up the ER process. Most simulation techniques in practice use physics-based models that rely on an understanding of the robotic system in question. Developing effective physics-based simulators is time-consuming and requires a significant level of specialised knowledge. A lengthy simulator development and tuning process is typically required before the ER process can begin. Artificial Neural Network simulators, referred to as Simulator Neural Networks (SNNs), can be used as an alternative to a physics-based simulation approach. SNNs are simple to construct, do not require significant levels of prior knowledge of the robotic system, are computationally efficient and can be highly accurate. Two types of ER approaches utilising SNNs exist. The Static Neuro-Simulation (SNS) approach involves developing SNNs before the ER process, where these SNNs are used instead of a physics-based simulator. Alternatively, SNNs can be developed during the ER process, called the Bootstrapped Neuro-Simulation (BNS) approach. Prior work investigating SNNs has largely been limited to simple robots. A complex robot has many degrees of freedom, and if a low-level controller design is used, the solution search space is high-dimensional and difficult to traverse. Prior work investigating the SNS and BNS approaches has mostly relied on simplified controller designs which rely on built-in prior knowledge of intended robot behaviours. This research uses low-level controller designs, which in turn rely on low-level simulators. Most ER studies are conducted on a single type of robot morphology. This research investigates the SNS and BNS approaches on two significantly different classes of robots.
A Hexapod and a Snake robot are used to study the SNS and BNS approaches. The Hexapod robot exhibits limbed, walking behaviours. The Snake robot is limbless and generates crawling behaviours. Demonstrating the viability of the SNS and BNS approaches for two different classes of robots provides strong evidence that the tested approaches are likely viable on other classes of robots. Various proposed improvements to the SNS and BNS approaches are investigated. The results demonstrate that the SNS and BNS approaches are viable when applied to Hexapod and Snake robots without restricting controller designs to those with significant levels of built-in prior knowledge of robot behaviours. SNNs configured in ensembles improve the likely performance outcomes of solutions. The expected benefit of adding simulator noise during the evolutionary process was not as pronounced for the problems investigated in this work.
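The ensemble result reported above can be illustrated with a toy neuro-simulator: several surrogate models are fitted to noisy logged (state, command, next-state) data and their predictions averaged. The linear dynamics, noise level and ensemble size here are invented for the sketch; the thesis uses neural networks trained on real robot data.

```python
import numpy as np

def true_dynamics(state, cmd):
    """Stand-in for the real robot; unknown to the simulators."""
    return 0.9 * state + 0.5 * cmd

def fit_simulator(rng, n=200, noise=0.05):
    """Fit one linear surrogate simulator from noisy logged transitions."""
    s = rng.uniform(-1, 1, n)                     # logged states
    u = rng.uniform(-1, 1, n)                     # logged motor commands
    nxt = true_dynamics(s, u) + rng.normal(0, noise, n)
    X = np.stack([s, u], axis=1)
    w, *_ = np.linalg.lstsq(X, nxt, rcond=None)   # least-squares model fit
    return w

def ensemble_predict(models, state, cmd):
    """Average the next-state predictions of all ensemble members."""
    x = np.array([state, cmd])
    return float(np.mean([m @ x for m in models]))

rng = np.random.default_rng(0)
models = [fit_simulator(rng) for _ in range(5)]
pred = ensemble_predict(models, 0.4, -0.2)
```

Averaging over independently fitted members damps the error each individual simulator inherits from its noisy training data, which is the intuition behind the ensemble improvement noted in the abstract.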
- Full Text:
- Date Issued: 2019
Vision-guided tracking of complex three-dimensional seams for robotic gas metal arc welding
- Authors: Hamed, Maien
- Date: 2011
- Subjects: Robot vision , Robotics , Gas metal arc welding
- Language: English
- Type: Thesis , Masters , MEngineering (Mechatronics)
- Identifier: vital:9646 , http://hdl.handle.net/10948/1428 , Robot vision , Robotics , Gas metal arc welding
- Description: Automation of welding systems is often restricted by the requirement for spatial information about the seams to be welded. When this cannot be obtained from the design of the welded parts and maintained using accurate fixturing, the use of a seam teaching or tracking system becomes necessary. Optical seam teaching and tracking systems have many advantages compared to systems implemented with other sensor families. Direct vision promises to be a viable strategy for implementing optical seam tracking, which has mainly been done with laser vision. The current work investigated direct vision as a strategy for optical seam teaching and tracking. A robotic vision system was implemented, consisting of an articulated robot, a hand-mounted camera and a control computer. A description of the calibration methods, the seam and feature detection, and the three-dimensional scene reconstruction is given. The results showed that direct vision is a suitable strategy for seam detection and learning. A discussion of generalizing the method used as an architecture for simultaneous system calibration and measurement estimation is provided.
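The three-dimensional scene reconstruction mentioned above ultimately reduces to triangulating seam points seen from calibrated camera poses. The sketch below shows standard linear (DLT) triangulation from two views; the projection matrices and the seam point are made-up values, not the thesis's calibration results.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one seam point from two calibrated
    views. P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel coords."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])     # each view contributes two
        rows.append(v * P[2] - P[1])     # linear constraints on X
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)          # null vector of A = homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# hypothetical cameras: identity view and one offset 0.2 m along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
point = np.array([0.1, 0.05, 1.0])       # made-up seam point in metres
uv1 = point[:2] / point[2]               # its projection in view 1
uv2 = (point - [0.2, 0.0, 0.0])[:2] / point[2]   # and in view 2
est = triangulate(P1, P2, uv1, uv2)
```

In a hand-mounted camera setup the two "views" come from moving the robot arm between captures, with the projection matrices supplied by the hand-eye calibration.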
- Full Text:
- Date Issued: 2011
Artificial neural networks as simulators for behavioural evolution in evolutionary robotics
- Authors: Pretorius, Christiaan Johannes
- Date: 2010
- Subjects: Neural networks (Computer science) , Robotics
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:10462 , http://hdl.handle.net/10948/1476 , Neural networks (Computer science) , Robotics
- Description: Robotic simulators for use in Evolutionary Robotics (ER) have certain challenges associated with the complexity of their construction and the accuracy of predictions made by these simulators. Such robotic simulators are often based on physics models, which have been shown to produce accurate results. However, the construction of physics-based simulators can be complex and time-consuming. Alternative simulation schemes construct robotic simulators from empirically-collected data. Such empirical simulators, however, also have associated challenges, such as that some of these simulators do not generalize well on the data from which they are constructed, as these models employ simple interpolation on said data. As a result of the identified challenges in existing robotic simulators for use in ER, this project investigates the potential use of Artificial Neural Networks, henceforth simply referred to as Neural Networks (NNs), as alternative robotic simulators. In contrast to physics models, NN-based simulators can be constructed without needing an explicit mathematical model of the system being modeled, which can simplify simulator development. Furthermore, the generalization capabilities of NNs suggest that NNs could generalize well on data from which these simulators are constructed. These generalization abilities of NNs, along with NNs’ noise tolerance, suggest that NNs could be well-suited to application in robotics simulation. Investigating whether NNs can be effectively used as robotic simulators in ER is thus the endeavour of this work. Since not much research has been done in employing NNs as robotic simulators, many aspects of the experimental framework on which this dissertation reports needed to be carefully decided upon. Two robot morphologies were selected on which the NN simulators created in this work were based, namely a differentially steered robot and an inverted pendulum robot. 
Motion tracking and robotic sensor logging were used to acquire data from which the NN simulators were constructed. Furthermore, custom code was written for almost all aspects of the study, namely data acquisition for NN training, the actual NN training process, the evolution of robotic controllers using the created NN simulators, as well as the onboard robotic implementations of evolved controllers. Experimental tests performed in order to determine ideal topologies for each of the NN simulators developed in this study indicated that different NN topologies can lead to large differences in training accuracy. After performing these tests, the training accuracy of the created simulators was analyzed. This analysis showed that the NN simulators generally trained well and could generalize well on data not presented during simulator construction. In order to validate the feasibility of the created NN simulators in the ER process, these simulators were subsequently used to evolve controllers in simulation, similar to controllers developed in related studies. Encouraging results were obtained, with the newly-evolved controllers allowing real-world experimental robots to exhibit obstacle avoidance and light-approaching behaviour with a reasonable degree of success. The created NN simulators furthermore allowed for the successful evolution of a complex inverted pendulum stabilization controller in simulation. It was thus clearly established that NN-based robotic simulators can be successfully employed as alternative simulation schemes in the ER process.
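The workflow described above, evaluating candidate controllers inside a learned simulator rather than on hardware, can be sketched with a one-dimensional stand-in: a surrogate rollout scores each genome, and an elitist evolutionary loop improves a proportional controller gain. The dynamics, fitness measure and evolutionary settings are placeholders, not the dissertation's actual setup.

```python
import random

def simulate(gain, steps=50, dt=0.1):
    """Stand-in neuro-simulator: roll out a 1-D approach task under a
    proportional controller u = -gain * x; fitness penalises slow convergence."""
    x, err = 1.0, 0.0
    for _ in range(steps):
        x += -gain * x * dt
        err += abs(x)
    return -err  # higher fitness = faster convergence to the target

def evolve_gain(pop=20, gens=30, seed=3):
    """Elitist evolution of the controller gain, evaluated only in simulation."""
    rng = random.Random(seed)
    genomes = [rng.uniform(0.0, 5.0) for _ in range(pop)]
    for _ in range(gens):
        genomes.sort(key=simulate, reverse=True)
        elite = genomes[: pop // 2]                       # keep the best half
        genomes = elite + [g + rng.gauss(0, 0.2) for g in elite]  # mutate copies
    return max(genomes, key=simulate)

best = evolve_gain()
```

Only after evolution converges in simulation would the winning controller be transferred to the physical robot, which is exactly where the simulator's generalization accuracy matters.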
- Full Text:
- Date Issued: 2010
Design and implementation of robotic control for industrial applications
- Authors: Will, Desmond Jeffrey
- Date: 2004
- Subjects: Robotics
- Language: English
- Type: Thesis , Masters , MTech (Electrical Engineering)
- Identifier: vital:10819 , http://hdl.handle.net/10948/213 , Robotics
- Description: Background: With the pressing need for increased productivity and delivery of end products of uniform quality, industry is turning more and more to computer-based automation. At present, most industrial automated manufacturing is carried out by special-purpose machines designed to perform specific functions in a manufacturing process. The inflexibility and generally high cost of these machines, often referred to as hard automation systems, have led to a broad-based interest in the use of robots capable of performing a variety of manufacturing functions in a more flexible working environment and at lower production costs. A robot is a reprogrammable general-purpose manipulator with external sensors that can perform various assembly tasks. A robot may possess intelligence, which is normally due to computer algorithms associated with its control and sensing systems. Industrial robots are general-purpose, computer-controlled manipulators consisting of several rigid links connected in series by revolute or prismatic joints. Most of today’s industrial robots, though controlled by mini- and microcomputers, are basically simple positional machines. They execute a given task by playing back a prerecorded or preprogrammed sequence of motions that has previously been guided or taught using the hand-held teach box. Moreover, these robots are equipped with few or no external sensors for obtaining the information vital to their working environment. As a result, robots are used mainly for relatively simple, repetitive tasks. More research effort has been directed at sensory feedback systems, which has resulted in improving the overall performance of the manipulator system. An example of a sensory feedback system is a vision Charge-Coupled Device (CCD) system, which can be utilized to adjust the robot position dependent on the surrounding robot environment (various object profile sizes).
This vision system can only be used within the robot movement envelope.
- Full Text:
- Date Issued: 2004
- Authors: Will, Desmond Jeffrey
- Date: 2004
- Subjects: Robotics
- Language: English
- Type: Thesis , Masters , MTech (Electrical Engineering)
- Identifier: vital:10819 , http://hdl.handle.net/10948/213 , Robotics
- Description: Background: With the pressing need for increased productivity and delivery of end products of uniform quality, industry is turning more and more to computer-based automation. At present, most industrial automated manufacturing is carried out by special-purpose machines designed to perform specific functions in a manufacturing process. The inflexibility and generally high cost of these machines, often referred to as hard automation systems, have led to a broad-based interest in the use of robots capable of performing a variety of manufacturing functions in a more flexible working environment and at lower production costs. A robot is a reprogrammable general-purpose manipulator with external sensors that can perform various assembly tasks. A robot may possess intelligence, which is normally due to computer algorithms associated with its control and sensing systems. Industrial robots are general-purpose, computer-controlled manipulators consisting of several rigid links connected in series by revolute or prismatic joints. Most of today’s industrial robots, though controlled by mini- and microcomputers, are basically simple positional machines. They execute a given task by playing back a prerecorded or preprogrammed sequence of motions that was previously guided or taught with a hand-held teach box. Moreover, these robots are equipped with little or no external sensing for obtaining information vital to their working environment. As a result, robots are used mainly for relatively simple, repetitive tasks. More research effort has been directed toward sensory feedback systems, which has resulted in improved overall performance of the manipulator system. An example of a sensory feedback system is a vision Charge-Coupled Device (CCD) system, which can be utilized to adjust the robot position depending on the surrounding robot environment (various object profile sizes). This vision system can only be used within the robot’s movement envelope.
- Full Text:
- Date Issued: 2004
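The CCD-based sensory feedback described in the abstract above amounts to closing a position loop through the camera: detect the object in the image, measure its offset from the image centre, and convert that pixel error into a manipulator correction. The sketch below illustrates that idea in Python; the function names, image size, and gain are illustrative assumptions, not details from the thesis.

```python
# Minimal sketch of image-based position feedback: a CCD camera sees an
# object's foreground pixels, and the centroid's offset from the image
# centre is mapped proportionally to a manipulator correction.
# All names and the gain value are illustrative assumptions.

def centroid(pixels):
    """Centroid (x, y) of a list of foreground pixel coordinates."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)

def position_correction(pixels, image_size=(640, 480), gain=0.01):
    """Proportional correction (dx, dy) steering the gripper toward the object.

    The gain maps pixel error to manipulator units; in practice it would be
    found by calibrating the camera within the robot's movement envelope.
    """
    cx, cy = centroid(pixels)
    error_x = cx - image_size[0] / 2
    error_y = cy - image_size[1] / 2
    return (gain * error_x, gain * error_y)
```

An object centred in the image yields a zero correction, so the loop settles once the manipulator is aligned with the target.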
Intelligent gripper design and application for automated part recognition and gripping
- Authors: Wang, Jianqiang
- Date: 2002
- Subjects: Automatic control , Robots, Industrial , Robotics
- Language: English
- Type: Thesis , Doctoral , DTech (Engineering)
- Identifier: vital:10816 , http://hdl.handle.net/10948/102 , Automatic control , Robots, Industrial , Robotics
- Description: Intelligent gripping may be achieved through gripper design, automated part recognition, intelligent algorithms for gripper control, and on-line decision-making based on sensory data. A generic framework that integrates sensory data, part recognition, decision-making and gripper control to achieve intelligent gripping on an ABB industrial robot is constructed. The three-fingered gripper, actuated by a linear servo actuator designed and developed in this project for precise speed and position control, is capable of handling a large variety of objects. Generic algorithms for intelligent part recognition are developed: edge vector representation is discussed and object geometric features are extracted. Fuzzy logic is successfully utilized to enhance the intelligence of the system, and the generic fuzzy logic algorithm, which may also find application in other fields, is presented. A model-based grasp-planning algorithm is proposed that extracts object grasp features from geometric features and reasons out a grasp model for objects with different geometries. Manipulator trajectory planning solves the problem of generating robot programs automatically. Object-oriented programming based on Visual C++ MFC is used to construct the system software so as to ensure compatibility, expandability and modular design. A hierarchical architecture for intelligent gripping is discussed, which partitions the robot’s functionalities into high-level (modelling, recognition, planning and perception) layers and low-level (sensing, interfacing and execution) layers. Individual system modules are integrated seamlessly to constitute the intelligent gripping system.
- Full Text:
- Date Issued: 2002
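The abstract above describes fuzzy logic as the decision-making layer for gripping. The sketch below shows the general shape such inference can take: fuzzify a measured object property through triangular membership functions, fire simple rules, and defuzzify by weighted average. The membership functions, rule base, and force values are invented for illustration and do not reproduce the thesis's actual algorithm.

```python
# Hedged illustration of fuzzy-logic decision-making for grip control:
# object width -> fuzzy sets (small/medium/large) -> crisp grip force.
# All breakpoints, rules, and output forces are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grip_force(width_mm):
    """Fuzzy inference mapping object width (mm) to grip force (N).

    Assumed rules: small object -> gentle grip, medium -> moderate,
    large -> firm. Defuzzified as a membership-weighted average of
    each rule's crisp output.
    """
    memberships = {
        "small": tri(width_mm, 0, 20, 60),
        "medium": tri(width_mm, 20, 60, 100),
        "large": tri(width_mm, 60, 100, 140),
    }
    outputs = {"small": 5.0, "medium": 15.0, "large": 30.0}
    total = sum(memberships.values())
    if total == 0:
        return 0.0  # width outside all defined sets
    return sum(memberships[k] * outputs[k] for k in outputs) / total
```

Overlapping sets make the output vary smoothly between rules: a 40 mm object is half "small" and half "medium", so the force falls midway between the two crisp outputs.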