With the aim of providing a solution to this problem, we have developed a mechanism for automatic self-configuration of distributed robotic systems, in which special components are able to establish, monitor, and, if needed, change the system configuration. This work has resulted in the following original outcomes: the design of formal descriptions for software components that are typical of distributed robotic systems; the design of formal task specifications that assimilate categories of system-level tasks to categories of system configurations; and the design and implementation of software that realizes the proposed self-configuration mechanism. This work has been inspired by ideas from the field of Semantic Web Services and by the heritage of reactive architectures in robotics. In order to validate our design, we have implemented and tested the self-configuration mechanism on a specific type of distributed robotic system: the PEIS-Ecology.

Reactive Navigation of an Autonomous Vehicle in Underground Mines. Örebro University, Örebro, Sweden, March 2007.
In addition, simulation experiments were conducted to evaluate the performance of SKEMon using known metrics. The results show that using semantic knowledge can lead to high performance in monitoring the execution of robot plans.

A Mechanism for Automatic Self-Configuration of Software Components in Robot Ecologies. Örebro University, Örebro, Sweden, October 2008. Abstract: Distributed robotic systems are nowadays being applied in several domains, such as ambient assisted living, elderly healthcare, and museum guidance. The internal control structures of the heterogeneous devices that constitute these systems are often profitably organized in component-based software architectures. The strong added value of such distributed systems comes from their potential ability to automatically self-configure the interaction patterns of their constituent components. Dynamically reconfigurable component interactions within and across the various devices allow these systems to automatically change the form of their actions by founding new cooperations among their devices, or by dissolving old ones. Automatic self-configuration would dramatically increase the adaptability of such systems to new tasks and situations. At present, no satisfactory solutions exist to the problem of how such distributed systems should automatically self-configure.
This development is essential to the applicability of our technique, since uncertainty is a pervasive feature of robotics. We present a general schema for dealing with situations where perceptual information relevant to SKEMon is missing. The schema includes steps for modeling and generating a course of action to actively collect such information. We describe approaches based on planning and on greedy action selection to generate the information-gathering solutions. The thesis also shows how such a schema can be applied to respond to failures occurring before or while an action is executed. The failures we address are ambiguous situations that arise when the robot attempts to anchor symbolic descriptions (relevant to a plan action) in perceptual information. The work reported in this thesis has been tested and verified using a mobile robot navigating in an indoor environment.
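To illustrate the greedy action-selection alternative mentioned above, here is a toy sketch; the action names, costs, and information items are invented for illustration and are not from the thesis:

```python
# Illustrative greedy selection of information-gathering actions.
# Each candidate sensing action provides a set of information items
# at some cost; the greedy loop picks the best coverage-per-cost action
# until the missing information is covered or no action helps.

def greedy_gather(missing_info, actions):
    """Repeatedly pick the sensing action that covers the most still-missing
    information per unit cost; return the action sequence and any leftovers."""
    missing = set(missing_info)
    plan = []
    while missing:
        best = max(actions, key=lambda a: len(missing & a["provides"]) / a["cost"])
        gained = missing & best["provides"]
        if not gained:
            break  # no remaining action adds information
        plan.append(best["name"])
        missing -= gained
    return plan, missing

actions = [
    {"name": "look_at_door", "provides": {"door_open"}, "cost": 1.0},
    {"name": "scan_room", "provides": {"door_open", "box_position"}, "cost": 3.0},
]
plan, unresolved = greedy_gather({"door_open", "box_position"}, actions)
```

Note that the greedy choice here takes two actions where `scan_room` alone would have sufficed; this kind of suboptimality is precisely what motivates the planning-based alternative the thesis also describes.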
In this thesis, we propose to use semantic domain knowledge to derive and monitor implicit expectations about the real effects of actions. For instance, a robot entering a room asserted to be an office should expect to see at least a desk and a chair. These expectations are derived from knowledge about the type of the room the robot is entering. If the robot enters a kitchen instead, then it should expect to see an oven, a sink, etc. The major contributions of this thesis are as follows. We define the notion of Semantic Knowledge-based Execution Monitoring (SKEMon), and we propose a general algorithm for it based on the use of description logics for representing knowledge. We develop a probabilistic approach to semantic knowledge-based execution monitoring to take into account uncertainty in both acting and sensing. Specifically, we allow for sensing to be unreliable and for action models to have more than one possible outcome. We also take into consideration uncertainty about the state of the world.
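As a toy illustration of this expectation-checking idea (not the thesis implementation, which represents knowledge in description logics), the office/kitchen example might be sketched as:

```python
# Toy sketch of semantic knowledge-based execution monitoring (SKEMon).
# The room types and expected-object lists are illustrative assumptions.

# Semantic knowledge: objects a robot should expect to see in each room type.
EXPECTED_OBJECTS = {
    "office": {"desk", "chair"},
    "kitchen": {"oven", "sink"},
}

def monitor_enter_room(asserted_type, observed_objects):
    """Compare the implicit expectations for the asserted room type
    against what the robot actually perceives; report missing objects."""
    expected = EXPECTED_OBJECTS.get(asserted_type, set())
    missing = expected - set(observed_objects)
    return {"ok": not missing, "missing": sorted(missing)}

result = monitor_enter_room("office", {"chair", "plant"})
print(result)  # the desk is missing, so the expectation check fails
```

A probabilistic version, as developed in the thesis, would replace the hard set-difference test with likelihoods that account for unreliable sensing and multiple action outcomes.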
Without range data, the robot and sensor attitudes only distract attention from the video, and the interface does not provide sufficient navigational cues.

Robust Execution of Robot Task-Plans: A Knowledge-Based Approach. Örebro University, Örebro, Sweden, September 2008. University library entry, including PDF. Abstract: Autonomous mobile robots are being developed with the aim of accomplishing complex tasks in different environments, including human habitats as well as less friendly places, such as distant planets and underwater regions. A major challenge faced by such robots is making sure that their actions are executed correctly and reliably, despite the dynamics and the uncertainty inherent in their working space. This thesis is concerned with the ability of a mobile robot to reliably monitor the execution of its plans and detect failures. Existing approaches for monitoring the execution of plans rely mainly on checking the explicit effects of plan actions, i.e., effects encoded in the action model. This presupposes that the effects to monitor are directly observable, but that is not always the case in a real-world environment.
This work investigates the use of a spherical robot for remote inspection. Ball-shaped robots are inherently stable and robust, encapsulating all sensors and moving parts. They are, however, prone to oscillation, which causes some control and perception challenges. We take the disadvantages of spherical robots into consideration in the development of a tele-operation user interface for remote inspection. The interface must promote good situation awareness given sensor data from an oscillating robot. It must also allow precise control over the spherical robot, which exhibits some peculiar motion patterns.
We consider remote inspection with a tele-operated spherical robot in an adjustable-autonomy framework where control, perceptual, and interaction autonomy are treated separately. With no control or interaction autonomy present in the system, we try to improve situation awareness by visualizing the robot and sensor data in a 3D virtual reality (VR) view, and also by introducing low-level perceptual autonomy features in the user interface: image stabilization and a virtual panorama. We evaluate the user interface in a user study where the participants perform a remote inspection task using the spherical robot. The experiment compares (1) using the 3D VR visualization against using a conventional 2D visualization, and (2) having the panorama feature enabled or disabled. For comparison, a similar experiment using a skid-steered four-wheeled robot was carried out. The experiments show that there is a difference between the visualization modes with the spherical robot: the conventional 2D visualization is more efficient. Without integrated video and map information, the virtual 3D view of the robot apparently adds nothing of benefit to the user interface.
In the above example, Emil is offering a perceptual functionality to Pippi. In a different situation, Emil could offer his motion functionality to help Pippi push a heavier parcel. In this thesis, we propose an approach to automatically generate, at run time, a functional configuration of a distributed robot system to perform a given task in a given environment, and to dynamically change this configuration in response to failures. Our approach is based on artificial intelligence planning techniques, and it is provably sound, complete, and optimal. In order to handle tasks that require more than one step (i.e., one configuration) to be accomplished, we also show how methods for automatic configuration can be integrated with methods for task planning in order to produce a complete plan in which each step is a configuration. For the scenario above, generating a complete plan before execution starts enables Pippi to know beforehand whether she can get the parcel or not.
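The notion of a functional configuration in the Pippi/Emil story can be illustrated with a toy backward-chaining sketch. The data structures and functionality names below are invented for illustration and are not the thesis formalism (which uses AI planning with soundness, completeness, and optimality guarantees):

```python
# Toy sketch: functionalities with typed inputs/outputs, hosted on
# different robots, wired together to serve a task.

FUNCTIONALITIES = [
    {"robot": "Emil", "name": "camera", "needs": set(), "gives": "image"},
    {"robot": "Emil", "name": "door_tracker", "needs": {"image"}, "gives": "pippi_pose"},
    {"robot": "Pippi", "name": "push_through_door", "needs": {"pippi_pose"}, "gives": "parcel_delivered"},
]

def configure(goal):
    """Backward-chain from the required output, wiring in functionalities
    (possibly on other robots) that produce each needed input."""
    config, pending = [], [goal]
    while pending:
        need = pending.pop()
        producer = next((f for f in FUNCTIONALITIES if f["gives"] == need), None)
        if producer is None:
            return None  # no configuration achieves the task
        config.append((producer["robot"], producer["name"]))
        pending.extend(producer["needs"])
    return list(reversed(config))

print(configure("parcel_delivered"))
```

The resulting configuration connects Emil's camera and door tracker to Pippi's pushing behavior, mirroring how Emil guides Pippi through the door in the example.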
We also propose an approach to merge configurations, which enables concurrent execution of configurations. Merging of configurations can be used for parallel execution of a sequence of configurations to reduce execution time (as demonstrated in this thesis), and for guaranteeing safe execution of multiple configurations generated by the same or different configuration processes. We demonstrate the applicability of our approach on a specific type of distributed robot system, called a PEIS-Ecology, and show experiments in which configurations and sequences of configurations are automatically generated and executed on real robots. Further, we give an experiment where merged configurations are created and executed on simulated robots.

Tele-Operated Remote Inspection with a Spherical Robot. Örebro University, Örebro, Sweden, December 2009. Abstract: Robotic remote inspection, such as security surveillance or disaster area examination, can be efficient and at the same time protect humans from danger. However, conventional tele-operation interfaces impose a cognitive burden on the operator, who is typically not a robotics expert, and high-level information interpretation and decision-making are neglected.
Why not let the robots do the same? Why not let robots help each other? Luckily for Pippi, there is another robot, named Emil, vacuum-cleaning the floor in the same room. Since Emil has a video camera and can view both Pippi and the door at the same time, he can estimate Pippi's position relative to the door and use this information to guide Pippi through the door via wireless communication. In that way he can enable Pippi to deliver the parcel to you. The goal of this thesis is to endow robots with the ability to help each other in a similar way. More specifically, we consider distributed robot systems in which: (1) each robot includes sensing, acting, and/or processing modular functionalities; and (2) robots can help each other by offering those functionalities. A functional configuration is any way to allocate and connect functionalities among the robots. An interesting feature of a system of this type is the possibility of using different functional configurations to make the same set of robots perform different tasks, or perform the same task under different conditions.
Örebro University, Örebro, Sweden, May 2009. University library entry, including PDF. Abstract: Imagine the following situation. You give your favorite robot, named Pippi, the task of fetching a heavy parcel that just arrived at your front door. While pushing the parcel back to you, she must travel through a door. Unfortunately, the parcel she is pushing blocks her camera, making it hard for her to see the door. If she cannot see the door, she cannot safely push the parcel through it. What would you as a human do in a similar situation? Most probably you would ask someone for help, someone to guide you through the door, just as we ask for help when we need to park our car in a tight parking spot.
Completed Theses (previously "Mobile Robotics Lab"): The following are the theses completed at the AASS Mobile Robotics Lab. These include both PhD theses and "Licentiate" theses; the latter is a Swedish degree obtained halfway through the doctoral studies. Online versions of most theses can be obtained by following the links to the university library. For other theses, please write a note to the author, to the supervisor(s), or to the lab director.

Robots that Help Each Other: Self-Configuration of Distributed Robot Systems.
Cherroun, Lakhmissi (2014). Navigation Autonome d'un Robot Mobile par des Techniques Neuro-Floues. Thesis, Faculté des Sciences et de la Technologie, UMK Biskra. Abstract (translated from French): This thesis deals with the autonomous navigation problem of a mobile robot using hybrid neuro-fuzzy techniques. The objective of the presented work is to study and develop effective control architectures for the reactive navigation of an autonomous mobile robot in an unknown environment, using on the one hand the behavior-based approach and on the other hand learning methods. The techniques employed to tackle this problem are based on fuzzy inference systems, artificial neural networks, and reinforcement learning. First, we used fuzzy behavior-based navigation for reactive and local planning; then, to tune the parameters of the fuzzy behaviors, we introduced hybrid neuro-fuzzy models for autonomous navigation. The second learning method, reinforcement learning, is particularly well suited to mobile robotics: through a process of trial and error, it finds the optimal action to execute in each situation the robot perceives, so as to maximize its rewards. To combine the advantages of fuzzy logic and reinforcement learning, a control strategy with a learning capability is used; it is an extension of Q-learning to continuous spaces and serves as an optimization method for fuzzy systems. The advantage of fuzzy systems is that available a priori knowledge can be introduced so that the initial behavior is acceptable. The effectiveness of the proposed architectures is demonstrated on various autonomous navigation applications with a mobile robot.
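A minimal sketch of the fuzzy Q-learning idea described above (per-rule q-values, a firing-strength-weighted global action and value, and a temporal-difference error distributed over the rules). The membership functions, action set, and learning rates are invented for illustration and are not the controllers developed in the thesis:

```python
# Minimal fuzzy Q-learning sketch: Q-learning extended to a continuous
# state by attaching q-values to fuzzy rules.

ALPHA, GAMMA = 0.1, 0.9
ACTIONS = [-1.0, 0.0, 1.0]      # e.g. steer left / straight / right

def memberships(x):
    """Two triangular fuzzy sets, 'near' and 'far', over a normalized
    distance reading x in [0, 1]; returns each rule's firing strength."""
    return [max(0.0, 1.0 - x), max(0.0, x)]

q = [[0.0] * len(ACTIONS) for _ in memberships(0.0)]  # per-rule q-values

def choose(x):
    """Each rule picks its greedy local action; the continuous global
    action and Q(x, a) are firing-strength-weighted combinations."""
    w = memberships(x)
    local = [max(range(len(ACTIONS)), key=lambda a, i=i: q[i][a])
             for i in range(len(w))]
    action = sum(wi * ACTIONS[a] for wi, a in zip(w, local))
    q_sa = sum(wi * q[i][a] for i, (wi, a) in enumerate(zip(w, local)))
    return action, local, q_sa

def value(x):
    """Value of a continuous state: weighted best q-value per rule."""
    return sum(wi * max(qi) for wi, qi in zip(memberships(x), q))

def update(x, local, q_sa, reward, x_next):
    """Temporal-difference update, distributed over the rules in
    proportion to their firing strength."""
    td = reward + GAMMA * value(x_next) - q_sa
    for i, (wi, a) in enumerate(zip(memberships(x), local)):
        q[i][a] += ALPHA * wi * td

action, local, q_sa = choose(0.2)
update(0.2, local, q_sa, reward=1.0, x_next=0.5)
```

Exploration (e.g. an ε-greedy choice per rule) is omitted for brevity; the a priori knowledge the abstract mentions would enter through the shape of the fuzzy sets and the initial q-values.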