Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 6th World Convention on Robots, Autonomous Vehicles and Deep Learning in Singapore.

Day 2 :

Keynote Forum

Soo-Young Lee

Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea

Keynote: Digital Companion with Human-like Emotion and Ethics

Time : 09:30-10:15

Biography:

Soo-Young Lee has worked on artificial cognitive systems with human-like intelligent behavior based on biological brain information processing. His research includes speech and image recognition, natural language processing, situation awareness, top-down attention, internal-state recognition and human-like dialog systems. Among the many internal states, he is especially interested in emotion, sympathy, trust and personality. He led the Korean Brain Neuroinformatics Research Program from 1998 to 2008 with the dual goals of understanding the brain's information processing mechanisms and developing intelligent machines based on those mechanisms. He is currently the Director of the KAIST Institute for Artificial Intelligence and leads the Emotional Conversational Agent Project and a Korean National Flagship AI Project. He was the President of the Asia-Pacific Neural Network Society in 2017 and has received the Presidential Award from INNS and the Outstanding Achievement Award from APNNA.

 

Abstract:

For successful interaction between humans and digital companions, i.e., machine agents, the digital companions need to be bound by human-like ethics as well as to hold emotional dialogue, understand human emotion and express their own emotions. In this talk we present our approaches to developing human-like ethical and emotional conversational agents as part of the Korean Flagship AI Program. The emotion of human users is estimated from text, audio and facial expressions during verbal conversation, and the emotion of the intelligent agents is expressed through speech and facial expression. Specifically, we will show how our ensemble networks won the Emotion Recognition in the Wild (EmotiW2015) challenge with 61.6% accuracy in recognizing seven emotions from facial expressions. A multimodal classifier then combines text, voice and facial video for better accuracy. A deep learning based Text-to-Speech (TTS) system will also be introduced to express emotions. The emotions of human users and agents interact with each other during the dialogue. Our conversational agents have chitchat and Question-and-Answer (Q&A) modes, and in chitchat mode the agents respond differently for different emotional states. The internal states will then be further extended to trustworthiness, implicit intention and personality. We will also discuss how the agents may learn human-like ethics during human-machine interactions.
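
To make the multimodal-combination idea concrete, here is a minimal late-fusion sketch, not the speaker's actual system: per-modality emotion probabilities are merged by a weighted average. The seven-emotion label set, the weights and the example probability vectors are illustrative assumptions.

# Hypothetical late-fusion sketch: combining per-modality emotion probabilities.
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def fuse_modalities(text_probs, audio_probs, face_probs, weights=(0.3, 0.3, 0.4)):
    """Weighted average of per-modality softmax outputs (late fusion)."""
    stacked = np.stack([text_probs, audio_probs, face_probs])   # shape (3, 7)
    fused = np.average(stacked, axis=0, weights=weights)        # shape (7,)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: each modality model outputs a probability vector over the seven emotions.
text_p  = np.array([0.05, 0.02, 0.03, 0.60, 0.10, 0.05, 0.15])
audio_p = np.array([0.10, 0.05, 0.05, 0.50, 0.10, 0.05, 0.15])
face_p  = np.array([0.05, 0.05, 0.05, 0.70, 0.05, 0.05, 0.05])
label, fused = fuse_modalities(text_p, audio_p, face_p)
print(label, fused.round(2))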

Keynote Forum

Mingcong Deng

Tokyo University of Agriculture and Technology, Japan

Keynote: Operator-based nonlinear control of micro hand and its application

Time : 10:15-11:00

Biography:

Prof. Mingcong Deng is a Professor at Tokyo University of Agriculture and Technology, Japan. He received his PhD in Systems Science from Kumamoto University, Japan, in 1997. From April 1997 to September 2010, he was with Kumamoto University; the University of Exeter, UK; NTT Communication Science Laboratories; and Okayama University. Prof. Deng is a member of SICE, ISCIE, IEICE, JSME and IEEJ, and a Senior Member of the IEEE. He specializes in three complementary areas: operator-based nonlinear fault detection and fault-tolerant control system design; system design of thermoelectric conversion elements; and applications of smart material actuators. Prof. Deng has over 460 publications, including 158 journal papers and 15 books or book chapters, in peer-reviewed outlets including IEEE Transactions, IEEE Press (for books) and other top-tier venues. He serves as a chief editor of the International Journal of Advanced Mechatronic Systems and The Global Journal of Technology and Optimization, and as an associate editor of six international journals, including an IEEE journal. Prof. Deng is Co-chair of the Agricultural Robotics and Automation Technical Committee of the IEEE Robotics and Automation Society and Chair of the Environmental Sensing, Networking, and Decision Making Technical Committee of the IEEE SMC Society. He received the 2014 Meritorious Service Award of the IEEE SMC Society.

Abstract:

Soft actuators have been receiving increased attention with the development of medical and other fields. A miniature pneumatic bending rubber actuator is one such soft actuator. The actuator has a bellows shape and is made of silicone rubber. Owing to the bellows shape, the actuator can perform large two-way bending when positive or negative air pressure is supplied. However, controlling the actuator and modeling it accurately are difficult because of its nonlinearity. Moreover, the actuator should be controlled without sensors, because its expected applications are in medical fields, especially surgical operations. On the other hand, a control system based on operator theory can be applied to nonlinear systems with uncertainties. The relationship between operator theory and passivity or adaptive control, which are important ideas in control engineering, has been discussed by several researchers. Meanwhile, support vector regression (SVR) has been utilized for classification and regression analysis, where the design parameters are selected using particle swarm optimization (PSO). Therefore, an operator-based control system is discussed. In order to realize sensorless control, PSO-SVR-based moving estimation with a generalized Gaussian distribution (GGD) kernel is employed. That is, an operator-based sensorless adaptive nonlinear control system considering passivity for the actuator, together with PSO-SVR-based moving estimation with a GGD kernel, is shown. Finally, some simulation and experimental results are introduced.
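
As a rough illustration of the PSO-SVR ingredient only, the sketch below tunes an SVR's hyperparameters with a plain particle swarm on surrogate data. The RBF kernel stands in for the GGD kernel, and the data, bounds and swarm settings are assumptions, not the presented estimator.

# Minimal sketch of PSO-tuned SVR on surrogate actuator data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))                 # surrogate actuator inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)      # surrogate bending response

def fitness(params):
    """Negative cross-validated MSE of an SVR with the given (log C, log gamma)."""
    C, gamma = np.exp(params)
    model = SVR(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

# Plain PSO over (log C, log gamma).
n_particles, n_iters = 10, 20
pos = rng.uniform(-3, 3, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -6, 6)                   # keep the search space bounded
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best C, gamma:", np.exp(gbest))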

Keynote Forum

Qingsong Xu

University of Macau, China

Keynote: Design and Application of Force-Sensing Robotic Bio-Micromanipulation Systems

Time : 11:15-12:00

Biography:

Qingsong Xu is the Director of the Smart and Micro/Nano Systems Laboratory and an Associate Professor of Electromechanical Engineering at the University of Macau. He was a Visiting Scholar at the University of California, Los Angeles (UCLA), USA in 2016; RMIT University, Melbourne, Australia in 2016; the National University of Singapore in 2012; and the Swiss Federal Institute of Technology (ETH Zurich), Switzerland in 2011. His current research areas involve micro/nano-mechatronics and systems, control and automation, and applications of computational intelligence. He is a Senior Member of IEEE and a Technical Editor of IEEE/ASME Transactions on Mechatronics. He has published three monographs and over 240 technical papers in international journals and conferences.

Abstract:

Robotic micromanipulation systems are in demand for realizing automated manipulation of biological samples. The majority of existing robotic bio-micromanipulation systems work based on displacement sensing and control. The lack of force sensing prevents the wide application of these devices. In the modern biotech industry, there is an increasing need for advanced micromanipulation equipment with microforce sensing and control capabilities. The development of force-sensing microinjectors and microgrippers enables extensive applications in the biological field with guaranteed safety and accuracy of advanced robotic manipulation. This talk reports our recent work on the design and development of new force-sensing robotic micromanipulation systems dedicated to biological micromanipulation tasks. In particular, a force-sensing microinjector and a force-sensing microgripper are presented as typical examples. The design of new microforce sensors is described in detail. Novel control schemes have been developed to fuse position and force control to enable safe and reliable manipulation. The effectiveness of the systems has been demonstrated by carrying out microinjection and microgripping operations on biological cells. The developed force-sensing robotic bio-micromanipulation systems have demonstrated wide applicability in fields such as biomedical engineering and gene engineering.
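
The fusion of position and force control can be pictured with a toy hybrid controller: track position until the microforce sensor reports contact, then regulate force. The gains, thresholds, contact stiffness and switching rule below are illustrative assumptions, not the speaker's actual control schemes.

def hybrid_step(x, x_ref, f_meas, f_ref, k_est=5e-3, f_contact=5e-8, kp_x=0.5, kp_f=0.5):
    """One control cycle: position tracking in free space, force regulation in contact."""
    if f_meas < f_contact:                      # free motion: track the position reference
        return kp_x * (x_ref - x)               # commanded increment in metres
    return kp_f * (f_ref - f_meas) / k_est      # contact: convert force error via estimated stiffness

# Toy cell-injection run: contact is assumed to begin 30 um into the travel.
x, stiffness = 0.0, 5e-3                        # position (m) and assumed contact stiffness (N/m)
for _ in range(50):
    f = max(0.0, (x - 30e-6) * stiffness)       # simulated microforce-sensor reading (N)
    x += hybrid_step(x, x_ref=80e-6, f_meas=f, f_ref=1e-7)
f = max(0.0, (x - 30e-6) * stiffness)
print(f"settled at {x*1e6:.1f} um with {f*1e6:.3f} uN of contact force")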

Keynote Forum

Menchita F Dumlao

The Philippine Women’s University, Philippines

Keynote: Central emergency response management system

Time : 12:00-12:45

Biography:

Menchita F Dumlao is currently the Research Director of the Philippine Women’s University, Philippines. Concurrently, she is an Associate Professor and Program Chair of the Department of Information Technology and a Technology Consultant of Imergex Information Technology, Inc. Her research interests are in the areas of data science, machine learning and artificial intelligence.

 

 

Abstract:

The Central Emergency Response Management (CERM) system facilitates estimation of the dispatch response time and of the route or direction from the origin of the resource to the incident site. The dispatch determines the capability required for a specific incident. Its core technology is the Call Taking Management System (CTMS), which applies stochastic optimization to make the sub-systems collaborate and complete the whole. The sub-systems are connected end to end to create a consolidated database system leading to a criminal information system and an incident command system, which includes a call taking and logging module, a resource unit dispatch and geo-mapping module, an SMS messaging services module, and resource availability and dispatch. The system can record audio patched from the telephone line through the audio input of the client PC. It provides information on the incident, incident type, location and other important details needed by the dispatcher (police). The map provides an aerial view (satellite image) of the location of an incident and searches for the nearest matching resource within 10,000 meters of the crime scene. The technology shows the best possible route from the resource unit's origin to the incident site. The control also shows text-based instructions, estimated time and distance.
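
As a sketch of the 10,000-meter nearest-resource search described above, the snippet below ranks candidate units by great-circle distance from the incident. The haversine formula, the unit records and the coordinates are illustrative assumptions, not the deployed CERM code.

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearest_units(incident, units, radius_m=10000):
    """Resource units within the search radius, nearest first."""
    hits = [(haversine_m(*incident, u["lat"], u["lon"]), u) for u in units]
    return sorted((pair for pair in hits if pair[0] <= radius_m), key=lambda p: p[0])

units = [  # hypothetical patrol units with last known positions
    {"name": "Patrol 1", "lat": 14.5995, "lon": 120.9842},
    {"name": "Patrol 2", "lat": 14.6500, "lon": 121.0300},
]
incident = (14.6042, 120.9822)  # reported incident location (illustrative)
for dist, unit in nearest_units(incident, units):
    print(f'{unit["name"]}: {dist:.0f} m from the incident')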

 

  • Speaker Session: Robotics | Artificial Intelligence | Medical Robotics | Industrial Robot Automation | Humanoid Robots | Human-Robot Interaction | Deep Learning
Location: Polaris II
Speaker

Chair

Emdad Khan

Internet Speech, USA

Session Introduction

Rattapon Thuangtong

Mahidol University, Thailand

Title: Robotic hair transplantation

Time : 13:45-14:15

Biography:

Rattapon Thuangtong is a second-year PhD student in Biomedical Engineering, Faculty of Engineering, Mahidol University, Bangkok, Thailand. His expertise is in robotic hair transplantation, FUT, FUE and synthetic hair transplantation, and he is a professional doctor at Siriraj Hospital, Thailand.

 

Abstract:

Follicular unit transplantation is classified into two techniques: (1) strip-harvesting follicular unit transplantation and (2) Follicular Unit Extraction (FUE). The ARTAS robotic hair transplantation system is a newly developed machine that uses a robotic arm to perform FUE. Using a skin tensioner and a photo sensor, the robot can perceive the direction of each hair follicle within the frame and direct the robotic arm, with two sets of punches (a sharp punch and a dull punch), to perform FUE precisely. ARTAS robotic hair transplantation can improve FUE by increasing the speed and accuracy of harvesting. After using ARTAS robotic hair transplantation for a few years, we found that it functions very well within the central area of the occiput, which we call the sweet spot. However, there are some limitations: (1) limitations in the temporoparietal area of the scalp, (2) limitations in the lower occipital area of the scalp, (3) limitations for female and senile patients, and (4) limitations for patients with previous surgery. We try to improve the results by: (1) selecting the proper patient, not too old because of fragile tissue; (2) injecting as much tumescent solution as possible to decrease the wound size and the transection; (3) upgrading the hardware and software of the ARTAS to the 3,000 RPM version; (4) combining a handheld motorized FUE machine to perform FUE in the temporoparietal and lower occipital areas; (5) extracting the hair follicles while the skin tensioner is on; and (6) using teamwork to maximize speed. Finally, robotic hair transplantation helps the surgeon perform FUE more quickly and accurately. It is an important milestone in hair transplantation, but it needs further improvement for the best patient outcomes.

 

Martin Heide Jørgensen

The Maersk Mc-Kinney Moller Institute-University of Southern Denmark, Denmark

Title: Industry 4.0, robotics and automation in the production environment: Future trends and challenges in product design

Time : 14:15-14:45

Biography:

Martin Heide Jørgensen has been the Program Coordinator for I4.0 at the University of Southern Denmark, The Maersk Mc-Kinney Moller Institute, since 2017. He completed his PhD at Aalborg University in the area of fracture mechanics. Since then, he has held various positions, all in the area of public research and education, including a period of seven years as Head of Department at Aalborg University. Within the last 3-4 years, his research area has been digitalization and I4.0.

 

Abstract:

Together with digitalization, the frame for product design is changing substantially. Tools for simulation and digital twin representations can now be connected directly with CAD systems. This means that the optimization of a product or mechanical solution can now be handled in a multi-objective system involving performance, reliability, production technologies and value chain analysis. A very important feature is the possibility of including systematic studies of the performance of the product or mechanical solution in use. This can still be done by empirical means, but thanks to digitalization, also using systematic sensor input. This enables a more user-oriented perspective in the design, but also allows the design to be defined in a more open manner, where the customer can use digital tools to define free elements and functionalities within a given frame of design freedom. To realize these new possibilities, a more flexible and agile production system is required. To this end, a number of robots, flexible production units and new digitalized technologies are being introduced into production environments. This leads, of course, to a higher degree of flexibility, but also to a need for planning and control to obtain economically feasible productivity. For many industrial CEOs, the big challenge is to find the right strategy and track in the world of digitalization to balance cost and productivity with the ability to act agilely and in harmony when new market possibilities occur. The business strategy and the strategy for optimizing products and production setups are becoming more complex and specific to the individual company. The challenge is to find generic tools or methods to support this development.

 

Aleksei Yuzhakov

Promobot LLC, Russia

Title: Promobot: Autonomous service robot for business

Time : 14:45-15:15

Biography:

Aleksei Yuzhakov completed his PhD at Perm National Research University. He is the CEO and Chairman of the Board of Directors of Promobot, a service robotics company.

 

 

Abstract:

Promobot is an autonomous service robot for business. It is designed to work in crowded spaces, where, by moving autonomously, it helps people with navigation, communicates, answers questions, shows promotional materials and remembers everyone with whom it has interacted. Promobot attracts the maximum audience to a company's products and takes people out of the process, since it works autonomously. Today, several hundred Promobot robots work in 26 countries, on almost every continent. They work as administrators, promoters, hosts and museum guides in organizations such as NPF Sberbank, Beeline, the Museum of Contemporary History of Russia and the Moscow Metro, and are able to increase companies' financial performance, quality of service and customer loyalty. The combination of speech recognition, face recognition, speech synthesis, an integrated linguistic database and the ability to look up information on the Internet has allowed us to create a unique experience for customers. When people come up to Promobot, they see not merely a voice assistant, ATM, toy or terminal; they see a robot, and they have to communicate with it like a robot. And the robot has to communicate with them, answering their questions, helping them, showing them the way, giving them a pass, and so on. Promobot is not something fancy that you only see on YouTube; it is a part of our life, and it is time to face it.

 

  • Workshop
Location: Polaris II
Speaker

Chair

Emdad Khan

Internet Speech, USA

Session Introduction

Caner Sahin

Imperial College London, UK

Title: Category-level 6D object pose recovery in depth images

Time : 15:15-16:00

Biography:

Caner Sahin is a PhD student in the Imperial Computer Vision and Learning Lab at the Department of Electrical and Electronic Engineering, Imperial College London. His PhD research is based on computer vision and machine learning. In particular, he is working on object recognition, detection and 6D pose estimation.

 

 

Abstract:

Intra-class variations and distribution shifts between source and target domains are the major challenges of category-level tasks. In this study, we address category-level full 6D object pose estimation in the context of the depth modality, introducing a novel part-based architecture that can tackle the above-mentioned challenges. Our architecture particularly adapts to the distribution shifts arising from shape discrepancies and naturally removes variations of texture, illumination, pose, etc., so we call it the Intrinsic Structure Adaptor (ISA). We engineer ISA based on the following innovations: (1) Semantically Selected Centers (SSC) are proposed in order to define the 6D pose at the level of categories; (2) 3D skeleton structures, which we derive as shape-invariant features, are used to represent the parts extracted from the instances of given categories, and privileged one-class learning is employed based on these parts; (3) graph matching is performed during training in such a way that the adaptation/generalization capability of the proposed architecture across unseen instances is improved. Experiments validate the promising performance of the proposed architecture.
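
As a generic illustration, loosely in the spirit of the part-matching step mentioned above, the snippet below recovers correspondences between two part sets with an assignment solver. The random descriptors and the Euclidean cost are placeholders, not the authors' ISA formulation.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
parts_a = rng.normal(size=(5, 16))                                    # 5 parts of instance A, 16-D descriptors
parts_b = parts_a[[2, 0, 4, 1, 3]] + 0.05 * rng.normal(size=(5, 16))  # same parts, permuted and perturbed

# Pairwise descriptor distances serve as the matching cost between the two part sets.
cost = np.linalg.norm(parts_a[:, None, :] - parts_b[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)
print("recovered part correspondence A->B:", dict(zip(rows.tolist(), cols.tolist())))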

 

Suranjana Trivedy

GATE IIT Training Institute, India

Title: Companion Bot

Time : 16:15-17:00

Biography:

Suranjana Trivedy is currently a faculty member at the GATE IIT Training Institute, India. She was a Research Scholar at IIIT Hyderabad. Her current research interests are in the areas of artificial intelligence, building intelligent systems, robotics, autonomous vehicles and UAVs.

 

Abstract:

The aim of the project is to build an intelligent machine capable of natural dialogue, which can talk with depressed and lonely older people and finally become their true companion. Specifically, the research goal was to build an automatic system to understand multimodal emotion, including facial expressions, speech tone and linguistic emotions. For that purpose we used semi-supervised learning methods, standard Natural Language Processing (NLP) unigram features and Information Retrieval (IR) Term-Frequency Inverse-Document-Frequency (TF-IDF) techniques to analyze emotion from text. We have also extracted emotion from facial images. The overall aim is to create an intelligent companion bot.
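
A minimal sketch of the TF-IDF text-emotion step, assuming scikit-learn, a toy two-emotion corpus and a logistic-regression classifier; the project's actual data, labels and models are not shown here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real system would use a large labelled emotion dataset.
texts = [
    "I feel so alone today", "Nobody calls me anymore",
    "I had a lovely chat with my grandchildren", "The garden looks beautiful this morning",
]
labels = ["sad", "sad", "happy", "happy"]

# Unigram TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Today I feel alone again"]))  # likely labelled "sad" on this toy data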