In recent years, smart workplaces that adapt to human activity and motion behavior have been proposed for cognitive production systems. In this respect, methods for identifying the feelings and activities of human workers are being investigated to improve the cognitive capability of smart machines such as robots in shared working spaces. Recognizing human activities and predicting the likely next sequence of operations may simplify robot programming and improve collaboration efficiency. However, human activity recognition still requires explainable models that are versatile, robust, and interpretable. Therefore, recognizing and analyzing human action details using continuous probability density estimates in different workplace layouts is essential. Three scenarios are considered: a standalone workplace, a one-piece flow U-form workplace, and a human-robot hybrid workplace. This work presents a novel approach to human activity recognition based on probabilistic spatial partitions (HAROPP). Its performance is compared to a geometric-bounded activity recognition method. Results show that spatial partitions based on probabilistic density contain 20% fewer data frames and cover 10% more spatial area than the geometric bounding box. On average, the approach correctly detects human activities in 81% of cases for a pre-known workplace layout. HAROPP has scalability and applicability potential for cognitive workplaces with a digital twin in the loop, pushing the cognitive capabilities of machine systems and realizing human-centered environments.
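The density-based partitioning idea can be illustrated with a minimal sketch (not the HAROPP implementation): a Gaussian kernel density estimate of synthetic 2D hand positions is evaluated on a grid and thresholded so the retained cells cover a chosen share of the probability mass, which can then be compared against the full bounding-box grid. Grid size, bandwidth, and the 80% mass quantile are illustrative assumptions.

```python
import numpy as np

def density_partition(points, grid=50, bandwidth=0.15, quantile=0.80):
    """Threshold a Gaussian kernel density estimate of 2D hand positions so
    that the retained grid cells cover `quantile` of the probability mass."""
    xs = np.linspace(points[:, 0].min(), points[:, 0].max(), grid)
    ys = np.linspace(points[:, 1].min(), points[:, 1].max(), grid)
    gx, gy = np.meshgrid(xs, ys)
    cells = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # isotropic Gaussian kernel density evaluated at every grid cell
    d2 = ((cells[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
    dens /= dens.sum()
    # keep the highest-density cells until `quantile` of the mass is covered
    order = np.argsort(dens)[::-1]
    keep = order[np.cumsum(dens[order]) <= quantile]
    mask = np.zeros(len(cells), dtype=bool)
    mask[keep] = True
    return mask.reshape(grid, grid)

rng = np.random.default_rng(0)
pts = rng.normal([0.5, 0.5], 0.1, size=(500, 2))  # synthetic hand positions
partition = density_partition(pts)
bbox_cells = partition.size                       # bounding box = full grid
```

The partition keeps only the high-density cells, whereas a geometric bounding box over the same samples would cover the entire grid.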
In industry, manual work is becoming increasingly important despite high labor costs due to the trend towards smaller production volumes and a higher number of variants. In order to identify optimization potential and increase productivity, there is a strong need to analyze and understand manual processes. Especially in Small and Medium-sized Enterprises (SME), which have limited resources for classic process time analysis, such analyses are rarely available. In this work, a method for the automatic generation of process time analyses of manual assembly processes is presented. It employs an industrial human activity recognition system. Human motion data is captured in the industrial environment during manual assembly operation and post-processed using a spatial partitioning approach. The study shows that about 76% of the manual operations in the proposed use case scenario are detected automatically, while the remainder are difficult to identify. It is shown that process time analysis can be carried out without expert knowledge or significant manual effort and provides knowledge about the process, which can be used to identify optimization potentials to increase productivity.
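A spatial-partitioning process time analysis can be sketched as follows (assumed layout and region names are hypothetical, not from the study): the tracked hand position is assigned to workplace regions, and each sample's sampling interval is accumulated as dwell time per region.

```python
import numpy as np

# hypothetical workplace regions as axis-aligned boxes (xmin, xmax, ymin, ymax)
# in metres; a real layout would come from the workstation model
REGIONS = {
    "parts_bin": (0.0, 0.3, 0.0, 0.3),
    "assembly":  (0.4, 0.8, 0.0, 0.4),
    "tool_rack": (0.0, 0.3, 0.5, 0.9),
}

def process_times(t, xy, regions=REGIONS):
    """Accumulate how long the tracked hand dwells in each spatial partition,
    approximating each sample's dwell time by its sampling interval."""
    dt = np.diff(t, append=t[-1])          # per-sample interval (last gets 0)
    times = {}
    for name, (x0, x1, y0, y1) in regions.items():
        inside = ((xy[:, 0] >= x0) & (xy[:, 0] <= x1) &
                  (xy[:, 1] >= y0) & (xy[:, 1] <= y1))
        times[name] = float(dt[inside].sum())
    return times

# synthetic trace: 5 s at the parts bin, then 5 s at the assembly area
t = np.linspace(0.0, 10.0, 101)
xy = np.where(t[:, None] < 5.0, [0.1, 0.1], [0.6, 0.2])
times = process_times(t, xy)
```

The resulting per-region dwell times form the raw material of a process time analysis without manual stopwatch studies.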
Today, increased demand for personalized products is making human-robot collaborative tasks a focus of research, mainly for improving production cycle time, precision, and accuracy. Simplifying how human-robot tasks and motions are generated is also required. Graphical flow-control-based programming is one such method. This work investigates whether graphical approaches (e.g., using RAFCON) yield better real-time simulation than agent-based approaches (e.g., using MOSIM-AJAN). This work may support the agility of the digital manufacturing process by enhancing the efficiency of human-robot collaboration.
In virtual production planning, simulating human motions helps to improve process planning and interaction efficiency. However, simulating multiple humans sharing tasks in a shared workplace requires understanding how human workers interact and share autonomy. In this regard, an Inertial Measurement Unit based motion capture system is employed for understanding shifting roles and learning effects. Parameters such as total time, distance, and acceleration variances across repetitions are considered for modeling collaborative motion interactions. The results distinguish motion patterns according to the undertaken interactions. This work may serve as an initial input for modeling interaction schemes and recognizing human action behavior during team assembly. Furthermore, the concept can be extended toward human-robot shared autonomy.
Flexible and varied manual assembly processes in the automotive industry are often based on manual labor. While simulation can be used to improve planning to maximize efficiency while minimizing ergonomic issues for workers, common simulation tools require extensive modeling time. In such simulations, the users are often process engineers who want to easily create complex human motion simulations. This paper presents a concept developed to create complex human motions for interacting with objects in a production environment with little effort. The concept separates geometric constraints from the semantic meaning of the respective geometry. With a set of data types developed for this purpose based on a unified ontology, a range of geometric and semantic information can be specified for arbitrary objects. In this way, an action-specific motion generator can be used to define the appropriate motion for the interaction with an object depending on the action, without defining case-specific constraints. As a first proof of concept, the approach is tested and demonstrated on the assembly of a pedal car and on sitting on a chair at a manual workstation. Based on the use case, the effort reduction effect is shown.
CIRP CMS
Latent Space Based Collaborative Motion Modeling from Motion Capture Data for Human Robot Collaboration
Tuli, Tadele Belay,
Henkel, Martin,
and Manns, Martin
Collaborative assembly operation is one of the current challenges regarding human robot collaboration (HRC). In this context, it is still unclear how robots and humans should behave in handling an object and anticipating mutual care. In many cases, modeling collaborative behaviors is difficult, which can be addressed by simplifying motion modeling techniques. In the current work, we propose a latent space approach that combines functional principal component analysis, to derive low-dimensional features, with Gaussian mixture models, to generate high-likelihood motion behavior estimates. This approach may increase agility in task planning and reduce programming difficulties in HRC.
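The latent-space idea can be sketched on synthetic data (plain PCA via SVD stands in for the paper's functional PCA, and a tiny EM loop replaces a library mixture model; all data and dimensions below are illustrative): trajectories are projected to a low-dimensional space, a Gaussian mixture is fitted there, and a high-likelihood motion is reconstructed from a component mean.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss(Z, m, C):
    """Multivariate normal density evaluated row-wise."""
    d = Z.shape[1]
    Ci = np.linalg.inv(C)
    q = np.einsum('ni,ij,nj->n', Z - m, Ci, Z - m)
    return np.exp(-0.5 * q) / np.sqrt((2 * np.pi) ** d * np.linalg.det(C))

def pca(X, k):
    """Project flattened motion trajectories to a k-dimensional latent space."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T, Vt[:k], mu

def fit_gmm(Z, k=2, iters=50):
    """Tiny EM fit of a k-component Gaussian mixture in the latent space."""
    n, d = Z.shape
    means = Z[rng.choice(n, k, replace=False)]
    covs = np.stack([np.cov(Z.T) + 1e-6 * np.eye(d)] * k)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        r = np.stack([w[j] * gauss(Z, means[j], covs[j]) for j in range(k)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and regularized covariances
        nk = r.sum(axis=0)
        w, means = nk / n, (r.T @ Z) / nk[:, None]
        for j in range(k):
            D = Z - means[j]
            covs[j] = (r[:, j, None] * D).T @ D / nk[j] + 1e-6 * np.eye(d)
    return w, means, covs

# two synthetic motion styles: sinusoidal reach trajectories of two amplitudes
steps = np.sin(2 * np.pi * np.linspace(0, 1, 20))
amps = np.r_[np.ones(50), 2 * np.ones(50)]
X = amps[:, None] * steps[None, :] + 0.05 * rng.normal(size=(100, 20))

Z, Vt, mu = pca(X, k=2)
w, means, covs = fit_gmm(Z)
new_motion = means[np.argmax(w)] @ Vt + mu  # high-likelihood motion estimate
```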
CPSL
Industrial Human Activity Prediction and Detection Using Sequential Memory Networks
Tuli, Tadele Belay,
Patel, Valay Mukesh,
and Manns, Martin
Prediction of human activity and detection of subsequent actions is crucial for improving the interaction between humans and robots during collaborative operations. Deep-learning techniques are being applied to recognize human activities, including in industrial applications. However, the lack of sufficient datasets in the industrial domain and the complexity of some industrial activities, such as screw driving and assembling small parts, affect model development and testing. The InHard dataset (Industrial Human Activity Recognition Dataset) was recently published to facilitate industrial human activity recognition for better human-robot collaboration, but it still lacks extended evaluation. We propose an activity recognition method that combines convolutional neural network (CNN) and long short-term memory (LSTM) techniques to evaluate the InHard dataset and compare it with a new dataset captured in a lab environment. This method improves the success rate of activity recognition by processing temporal and spatial information. Accordingly, the accuracy of the dataset is tested using labeled lists of activities from IMU and video data. A model is trained and tested on nine low-level activity classes with approximately 400 samples per class. The test results show 88% accuracy for IMU-based skeleton data, 77% for RGB spatial video, and 63% for RGB video-based skeleton data. The results have been verified against a previously published region-based activity recognition method. The proposed approach can be extended to push the cognitive capability of robots in human-centric workplaces.
Zenodo
HARNets: Human Activity Recognition Networks Based on Python Programming Language
Tuli, Tadele Belay,
Patel, Valay Mukesh,
and Manns, Martin
In human–robot collaboration (HRC), human motion capture can be considered an enabler for switching autonomy between humans and robots to create efficient and safe operations. For this purpose, wearable motion tracking systems such as IMU and lighthouse-based systems have been used to transfer human joint motions into robot controller models. For reasons such as global positioning, drift, and occlusion, in some situations, e.g., HRC, both systems have been combined. However, it is still not clear whether the motion quality (e.g., smoothness, naturalness, and spatial accuracy) is sufficient when the human operator is in the loop. This article presents a novel approach for measuring human motion quality and accuracy in HRC. Human motion capture has been implemented in a laboratory environment with forty repetitions of a cyclic operation. Human motion, specifically of the wrist, is guided by the robot tool center point (TCP), which is predefined to generate circular and square motions. Compared to the robot TCP motion, considered the baseline, the hand wrist motion deviates by up to 3 cm. The approach is valuable for understanding the quality of human motion behavior and can be scaled up for various applications involving shared human-robot workplaces.
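The deviation measurement against a circular TCP reference can be sketched as follows (synthetic wrist samples with assumed 5 mm tracking noise stand in for the captured data):

```python
import numpy as np

def path_deviation(path, center, radius):
    """Max and RMS radial deviation of a tracked path from a circular
    TCP reference with the given center and radius."""
    err = np.linalg.norm(path - center, axis=1) - radius
    return np.abs(err).max(), np.sqrt(np.mean(err ** 2))

# synthetic wrist samples: a 0.2 m circle plus 5 mm tracking noise
rng = np.random.default_rng(2)
ang = np.linspace(0.0, 2 * np.pi, 200)
reference = 0.2 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
wrist = reference + rng.normal(0.0, 0.005, reference.shape)
max_dev, rms_dev = path_deviation(wrist, np.zeros(2), 0.2)
```

The same radial-error computation applies per cycle, so maximum and RMS deviation can be aggregated over the forty repetitions.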
Simulating human motion behavior in assembly operations helps to create efficient collaboration plans for humans and robots. However, identifying human intention may require high-quality human motion capture data in order to discriminate micro-actions and human attention. In this regard, a human motion capture setup that combines several systems, such as body joint, finger, and eye trackers, is proposed, together with a methodology for identifying the intention of human operators and predicting sequences of activities. The approach may lead to safer human-robot collaboration.
IEEE ETFA
Knowledge-Based Digital Twin for Predicting Interactions in Human-Robot Collaboration
Semantic representation of motions in a human-robot collaborative environment is essential for agile design and development of digital twins (DT) towards ensuring efficient collaboration between humans and robots in hybrid work systems, e.g., in assembly operations. Dividing activities into actions helps to further conceptualize motion models for predicting what a human intends to do in a hybrid work system. However, it is not straightforward to identify human intentions in collaborative operations for robots to understand and collaborate. This paper presents a concept for semantic representation of human actions and intention prediction using a flexible task ontology interface in the semantic data hub stored in a domain knowledge base. This semantic data hub enables the construction of a DT with corresponding reasoning and simulation algorithms. Furthermore, a knowledge-based DT concept is used to analyze and verify the presented use case of human-robot collaboration in assembly operations. The preliminary evaluation showed a promising reduction of time for assembly tasks, which identifies the potential to i) improve efficiency, reflected by reduced costs and errors, and ultimately ii) assist human workers in improving decision making. Thus, the contribution of the current work combines machine learning, robotics, and ontology engineering in a DT to improve human-robot interaction and productivity in a collaborative production environment.
wtOnline
Virtuelle Montageplanung mit Motion Capture Systemen/Virtual assembly planning with motion capture systems
Jonek, Michael,
Manns, Martin,
and Tuli, Tadele Belay
In planning of semi-automated assembly processes, an important aspect is to avoid non-value-adding activities such as walking movements. Studies have shown that the actual walking movements in assembly processes differ from the planned movements. This paper presents a method of capturing actual walking movements with motion capture and integrating them into walking path planning so that process and workplace design can be optimized at an early stage.
CIRP CMS
Path planning for simulating human motions in manual assembly operations
Tuli, Tadele Belay,
Manns, Martin,
Zöller, Christian,
and Klein, Daniel
Assembly operation simulation can, for example, be used for optimizing complex manual assembly processes in the automotive industry. However, realistic human motion modeling involves difficult tasks, such as inserting objects, that require constraint definitions and collision detection. In this work, a method that applies the concept of an optimized potential field is employed to generate a motion model unit for simulating human motion for the assembly of pedal cars. This motion model unit is implemented in the newly created MOSIM motion model interfaces, which are currently being standardized. This approach may allow motion simulation without being tied to a specific 3D environment.
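A minimal potential field sketch (without the paper's optimization, MOSIM interfaces, or 3D environment; gains, influence radius, and the obstacle are illustrative assumptions) performs gradient descent on an attractive goal term plus a repulsive obstacle term:

```python
import numpy as np

def potential_field_path(start, goal, obstacles, step=0.01, eta=0.002,
                         tol=0.02, max_iter=5000):
    """Gradient descent on an attractive-plus-repulsive potential field.
    `obstacles` holds (center, radius) pairs; repulsion acts within 2*radius."""
    p = np.array(start, dtype=float)
    goal = np.array(goal, dtype=float)
    path = [p.copy()]
    for _ in range(max_iter):
        grad = p - goal                      # attractive gradient toward goal
        for c, r in obstacles:
            d = np.linalg.norm(p - c)
            d0 = 2.0 * r                     # influence distance
            if d < d0:                       # classic repulsive gradient term
                grad += -eta * (1.0 / d - 1.0 / d0) / d ** 2 * (p - c) / d
        p = p - step * grad                  # descend the combined potential
        path.append(p.copy())
        if np.linalg.norm(p - goal) < tol:
            break
    return np.array(path)

obstacle = (np.array([0.5, 0.3]), 0.1)
path = potential_field_path([0.0, 0.0], [1.0, 1.0], [obstacle])
```

The generated waypoint sequence reaches the goal while keeping clear of the obstacle's radius.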
Due to the change from mass production to mass personalized production and the resulting intrinsic product flexibility, the automotive industry, among others, is looking for cost-efficient and resource-saving production methods that can be combined with global just-in-time production. In addition to geometric manufacturing flexibility, additive manufacturing offers a resource-saving application for rapid prototyping and small series in predevelopment. In this study, the FDM process is utilized to manufacture the tooling to draw a small series of sheet metal parts in combination with the rubber pad forming process. Therefore, a variety of common AM polymer materials (PETG, PLA, and ABS) is compared in compression tests, from which PLA is selected to be applied as the sheet metal forming die. For the rubber pad forming process, relevant processing parameters, i.e., press force and rubber cushion hardness, are studied with respect to forming depth. The product batch is examined by optical evaluation using a metrological system. The scans of the tool and sheet metal parts confirm the mechanical integrity of the additively manufactured polymer die and thus the suitability of this approach for small series in sheet metal drawing processes, e.g., for automotive applications.
Human–robot interaction has extended its application horizon to simplify how human beings interact with each other through a remotely controlled telepresence robot. The fast growth of communication technologies such as 4G and 5G has elevated the potential to establish stable audio-video-data transmission. However, human–robot physical interactions are still challenging regarding maneuverability, controllability, stability, drive layout, and autonomy. Hence, this paper presents a systematic design and control approach based on the customer's needs and expectations of telepresence mobile robots for social interactions. A system model and controller design are developed using the Lagrangian method and a linear quadratic regulator, respectively, for different scenarios such as a flat surface, an inclined surface, and yaw (steering). The robot system is capable of traveling uphill (30°) and has a variable height (600–1200 mm). The robot is advantageous in developing countries for filling skill gaps as well as for sharing knowledge and expertise through a virtual and mobile physical presence.
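The LQR design step can be sketched in discrete time via backward Riccati iteration; the double-integrator plant below is a stand-in for the paper's Lagrangian model, and the weights are illustrative assumptions.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via backward Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# double-integrator stand-in for the linearized robot model:
# state = [position, velocity], sampled at dt = 0.01 s
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))

# closed-loop regulation from a 1 m initial offset over 20 s
x = np.array([[1.0], [0.0]])
for _ in range(2000):
    x = (A - B @ K) @ x
```

The state is driven to the origin, confirming that the computed gain stabilizes the closed loop.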
VDI-Z
Roboterunterstütztes Biegen von Verbundrohren
Engel, Bernd,
Manns, Martin,
Tuli, Tadele Belay,
and Reuter, Jonas
Bending a continuous fiber-reinforced thermoplastic composite tube with robot support could improve the efficiency of the bending process and the quality of the end product. The following describes the cooperation of robots and bending machines in processing this type of tube.
2019
ECSA
Real-Time Motion Tracking for Humans and Robots in a Collaborative Assembly Task
Human-robot collaboration combines the extended capabilities of humans and robots to create a more inclusive and human-centered production system in the future. However, human safety is the primary concern for manufacturing industries. Therefore, real-time motion tracking is necessary to identify whether human body parts enter the restricted working space solely dedicated to the robot. Tracking these motions with decentralized and heterogeneous tracking systems requires a generic model controller and consistent motion exchange formats. In this work, we investigate a concept for unified real-time motion tracking for human-robot collaboration. In this regard, a low-cost, game-based motion tracking system, e.g., HTC Vive, is utilized to capture human motion by mapping it onto a digital human model in the Unity3D environment. The human model is described using a biomechanical model that comprises joint segments defined by position and orientation. For robot motion tracking, a unified robot description format is used to describe the kinematic trees. Finally, an assembly operation that involves snap joining is simulated to analyze the real-time capability of the system. The distribution of joint variables in spatial-space and time-space is analyzed. The results suggest that real-time tracking in human-robot collaborative assembly environments can be considered to maximize the safety of the human worker. However, the accuracy and reliability of the system with respect to disturbances still need to be verified.
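The restricted-workspace check described above reduces, in its simplest form, to testing tracked joint positions against the robot-only volume. A minimal sketch with a hypothetical axis-aligned zone (the real system would use the tracked skeleton and the actual cell geometry):

```python
import numpy as np

# hypothetical robot-only workspace as an axis-aligned box (metres):
# rows give the (min, max) range along x, y, z
ROBOT_ZONE = np.array([[0.8, 1.4],
                       [0.0, 0.6],
                       [0.0, 1.2]])

def in_restricted_zone(joints, zone=ROBOT_ZONE):
    """True if any tracked body joint (n x 3 positions) lies inside the
    workspace reserved for the robot."""
    inside = (joints >= zone[:, 0]) & (joints <= zone[:, 1])
    return bool(inside.all(axis=1).any())

safe_pose = np.array([[0.2, 0.3, 1.0], [0.3, 0.3, 0.9]])    # both joints clear
unsafe_pose = np.array([[0.2, 0.3, 1.0], [1.0, 0.2, 0.8]])  # wrist inside zone
```

Evaluating this per frame over the streamed joint positions yields the real-time safety signal.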
CIRP CMS
Hierarchical motion control for real time simulation of industrial robots
Multi-axis machine tools implement real-time motion control algorithms. Objects with difficult-to-manufacture and deformable shapes show variable properties when tools apply pressure to the object surface. Force-compliant industrial robots are used to manipulate deformable objects in real-time simulation. In this research, a hierarchical motion control strategy is presented. The proposed approach is described with respect to performance characteristics such as accuracy, repeatability, controllability of the motion segmentation, and time dynamics. The results show that a hierarchical control approach can be considered a potential candidate for manipulating deformable objects where force requirements are critical.
JMMP
Automated Unsupervised 3D Tool-Path Generation Using Stacked 2D Image Processing Technique
Tuli, Tadele Belay,
and Cesarini, Andrea
Journal of Manufacturing and Materials Processing
Dec
2019
Tool-path, feed-rate, and depth-of-cut of a tool determine the machining time, tool wear, power consumption, and realization costs. Before commissioning and production, a preliminary phase of failure-mode identification and effect analysis allows selecting the optimal machining parameters for cutting, which, in turn, reduces machinery faults and production errors and, ultimately, decreases costs. For this, scalable high-precision path generation algorithms requiring a low amount of computation are advisable. The present work provides such a simplified, scalable, computationally low-intensive technique for tool-path generation. From a three-dimensional (3D) digital model, the presented algorithm extracts multiple two-dimensional (2D) layers. Depending on the required resolution, each layer is converted to a spatial image, and an algebraic analytic closed-form solution provides a geometrical tool path in Cartesian coordinates. The produced tool paths are stacked after processing all object layers. Finally, the generated tool path is translated into machine code using a G-code generator algorithm. The introduced technique was implemented and simulated using MATLAB® pseudocode with a G-code interpreter and a simulator. The results showed that the proposed technique produced a reliable automated unsupervised tool-path generation algorithm and reduced tool wear and costs by allowing the selection of the tool depth-of-cut as an input.
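The stacked 2D pipeline can be sketched as follows (a heavily simplified stand-in for the paper's MATLAB implementation: the boundary extraction assumes convex-ish contours and the cylinder test shape, pixel size, and layer height are illustrative):

```python
import numpy as np

def layer_toolpath(mask, pixel=1.0):
    """Boundary pixels of one binary layer image, ordered by angle around
    the centroid (sufficient for convex-ish contours)."""
    padded = np.pad(mask, 1)
    solid = padded[1:-1, 1:-1]
    # a boundary pixel is solid with at least one empty 4-neighbour
    nbr_empty = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                 ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
    ys, xs = np.nonzero(solid & nbr_empty)
    pts = np.stack([xs, ys], axis=1) * pixel
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    return pts[order]

def to_gcode(layers, layer_height=0.5):
    """Translate stacked layer contours into simple G1 moves."""
    lines = ["G21 ; mm", "G90 ; absolute"]
    for z, path in enumerate(layers):
        lines.append(f"G0 Z{z * layer_height:.2f}")
        lines += [f"G1 X{x:.2f} Y{y:.2f}" for x, y in path]
    return lines

# a small cylinder: three identical circular layers from a 20x20 image
yy, xx = np.mgrid[0:20, 0:20]
disk = (xx - 10) ** 2 + (yy - 10) ** 2 <= 64
gcode = to_gcode([layer_toolpath(disk)] * 3)
```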
2018
ICAST
Mathematical Modeling and Dynamic Simulation of Gantry Robot Using Bond Graph
Tuli, Tadele Belay
In Information and Communication Technology for Development for Africa
2018
This paper presents an initial mathematical modeling and dynamic simulation of a gantry robot for printing circuits on boards. Classical modeling methods such as Newton-Euler, Kirchhoff's law, and Lagrangian approaches fail to unify electrical and mechanical system models. Here, a bond graph approach with robust trajectory planning, which uses a blend of quadratic equations on a triangular velocity profile, is modeled in order to simulate the system virtually. The algebraic mathematical models are developed using Maple software. For the simulation, the model is tested in MATLAB by integrating robot models developed in SolidWorks.
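The triangular velocity profile with quadratic position blends can be sketched directly (the joint limits and acceleration below are illustrative, not from the paper): the axis accelerates at a constant rate to the midpoint, then decelerates symmetrically, giving piecewise-quadratic position segments.

```python
import numpy as np

def triangular_profile(q0, q1, a, n=500):
    """Point-to-point axis trajectory with a triangular velocity profile:
    constant acceleration `a` to the midpoint, then symmetric deceleration
    (quadratic position blends)."""
    D = q1 - q0
    T = 2.0 * np.sqrt(abs(D) / a)            # total travel time
    t = np.linspace(0.0, T, n)
    s = np.where(t <= T / 2,
                 0.5 * a * t ** 2,           # acceleration blend
                 abs(D) - 0.5 * a * (T - t) ** 2)  # deceleration blend
    return t, q0 + np.sign(D) * s

t, q = triangular_profile(0.0, 0.3, a=2.0)   # 0.3 m stroke at 2 m/s^2
```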
2015
M.Sc. Thesis
Tuli, T. B. (2015). Task and path planning of industrial manipulator robot. http://www5.unitn.it/Biblioteca/en/Web/TesiDocente/193537
Finite Element Analysis of Bus Body Structures: Case Study at Bishoftu Automotive and Locomotive Industry, Ethiopia
Adama Science and Technology University, Adama
2012
Most automotive and locomotive industries lack basic scientific methods, such as computer applications, for designing and manufacturing vehicle body structures and analyzing static and dynamic load effects. This work presents a finite element model of a city bus for both static and dynamic conditions. The effects are verified using quarter-car and half-car multibody models. The basic finite element models are represented by a computer program with a graphical user interface in the MATLAB environment. Designers and manufacturers can observe the responses of these models under different conditions. This work therefore introduces mathematical models based on the finite element method to show how advanced modeling and analysis tools can be used for the design and manufacturing of city bus body structures. Both vehicle industries and academic sectors can use this work as a further reference.
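The quarter-car verification model mentioned above is a two-mass spring-damper system; a minimal time-domain sketch (parameter values are typical textbook figures, not the thesis data) integrates the sprung and unsprung masses over a step road input:

```python
import numpy as np

def quarter_car(ms=300.0, mu=40.0, ks=16000.0, ku=160000.0, cs=1000.0,
                bump=0.05, dt=1e-4, T=3.0):
    """Quarter-car (sprung + unsprung mass) response to a step road bump,
    integrated with semi-implicit Euler. Returns the body displacement."""
    zs = zu = vs = vu = 0.0
    hist = []
    for i in range(int(T / dt)):
        zr = bump if i * dt > 0.5 else 0.0        # road step at t = 0.5 s
        fs = ks * (zu - zs) + cs * (vu - vs)      # suspension force on body
        ft = ku * (zr - zu)                       # tyre force on wheel
        vs += dt * fs / ms                        # sprung mass dynamics
        vu += dt * (ft - fs) / mu                 # unsprung mass dynamics
        zs += dt * vs
        zu += dt * vu
        hist.append(zs)
    return np.array(hist)

zs = quarter_car()
```

The body displacement settles at the bump height after a damped transient, which is the kind of response the thesis compares against the finite element model.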