Scientific Program

Conference Series LLC Ltd invites participants from across the globe to attend the 6th World Convention on Robots, Autonomous Vehicles and Deep Learning in Singapore.

Day 1 :

Keynote Forum

Juan Pedro Bandera Rubio

Assistant Professor, University of Malaga

Keynote: Why socially assistive robots?

Time : 9:15 - 10:00

Biography:

Juan Pedro Bandera received his PhD from the University of Malaga in 2010 and has held a position as Assistant Professor at the same university since 2017. He has been involved in teaching and research activities at this institution for the last 14 years. His research topics lie in the fields of social robotics and artificial vision, more precisely in autonomous gesture recognition, learning by imitation, inner knowledge representations, multimodal interaction and attentional mechanisms. He has published more than 35 papers in international journals and conferences, co-supervised several theses in his research fields, and completed research stays at R+D institutions in Spain, Portugal, the United Kingdom, Germany and Singapore. He has participated in Spanish and European R+D projects, including the ECHORD++ FP7-ICT-601116 project CLARC, in which a socially assistive robot is employed to help clinicians in Comprehensive Geriatric Assessment (CGA) procedures.

Abstract:

Robots are becoming part of our daily lives. The next generation of robots includes autonomous cars, context-aware vacuum cleaners, smart house devices, collaborative wheelchairs, etc. Some of these robots are designed not only to work in daily life environments, but also to engage the people around them in social interaction, or even to collaborate with them in solving different tasks. These social robots face more complex technical challenges regarding perceptual and motor capabilities, cognitive processing and adaptability. They deal with more demanding safety issues. Finally, they also raise a delicate ethical dilemma regarding their use. Hence, answering the question "why use a social robot?" becomes a mandatory prerequisite to deploying one. This talk addresses this question for a subset of social robots: socially assistive robots. These robots focus on assisting people through social interaction in daily life environments (i.e. houses, nursing homes, etc.). They are part of the technologies for Assisted Living, a key concept in the upcoming silver society. The motivation to use them rests on a set of features: many therapies and rehabilitation processes require social interaction rather than physical contact; socially assistive robots can be proactive, looking for people, starting interactions, sharing information, and remembering and proposing events or activities; and people are more motivated to interact with physically embodied agents (people, pets, robots) than with screens. All these benefits have driven an important R+D effort involving companies and institutions worldwide, and socially assistive robots are becoming an interesting business opportunity. However, key questions related to the cost, safety, acceptability and usability of socially assistive robots remain open. Moreover, these items have yet to be evaluated in long-term experiments.
This talk details the current advances and future work towards resolving these questions in the framework of the ECHORD++ EU project CLARC.

Boian Mitov

Founder, Mitov Software
Biography:

Boian Mitov is a software developer and the founder of Mitov Software http://www.mitov.com , specializing in the areas of video, audio, digital signal processing, data acquisition, hardware control, industrial automation, communications, computer vision, artificial intelligence, and parallel and distributed computing.
He has over 30 years of programming experience across a wide variety of software problems, and is a regular contributor to the Blaise Pascal Magazine http://www.blaisepascal.eu .
He is the author of the OpenWire open source technology, the IGDI+ open source library, the VideoLab, SignalLab, AudioLab, PlotLab, InstrumentLab, VisionLab, IntelligenceLab, AnimationLab, LogicLab, CommunicationLab, and ControlLab libraries, OpenWire Studio, Visuino www.visuino.com , and the "VCL for Visual C++" technology.

Abstract:

Statement of the Problem: Robotics is becoming an increasingly important part of STEM education. With the increased availability of cheap Arduino controllers, sensor modules, actuators and entire robot kits, students are able to learn quickly how to build robots and connect the electronic components. Most students, however, struggle with programming the microcontrollers, due both to the complexity of the available programming tools and to the tools' general unsuitability for easily handling simultaneous tasks. Software development is considerably different from designing and connecting hardware, requiring a complete paradigm shift in understanding how everything works.
A new approach to programming robots in STEM classes: To solve this problem, a new graphical data-flow and event-driven programming approach to microcontroller and robot programming has been developed, and an easy-to-use graphical development environment called Visuino https://www.visuino.com has been introduced. The data-flow, event-driven approach makes programming very similar to the way kids connect sensor modules and actuators to the microcontroller. In Visuino, typical hardware modules such as servos and stepper motors are represented by corresponding graphical software components, making it easy to understand how the hardware will be controlled.
Once the graphical design is completed, pressing a button makes Visuino generate ready-to-compile-and-upload C++ code, making it quick and easy to create complex robot projects in a very short time.
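The data-flow, event-driven idea described above can be sketched in plain code: components expose output "pins", and wiring a pin to a downstream component replaces imperative glue logic, much as modules are linked graphically. The following is a minimal, hypothetical Python sketch of the paradigm only — it is not Visuino's implementation or its generated C++ code, and all class and pin names are illustrative.

```python
class OutputPin:
    """An output pin pushes each produced value to every connected sink."""
    def __init__(self):
        self._sinks = []

    def connect(self, sink):
        self._sinks.append(sink)

    def send(self, value):
        for sink in self._sinks:
            sink(value)

class AnalogSensor:
    """Hypothetical stand-in for a sensor module producing raw ADC readings."""
    def __init__(self):
        self.out = OutputPin()

    def read(self, raw):
        self.out.send(raw)

class MapRange:
    """Transform component: maps a 0..1023 ADC reading to a 0..180 degree angle."""
    def __init__(self):
        self.out = OutputPin()

    def __call__(self, value):
        self.out.send(round(value * 180 / 1023))

class ServoComponent:
    """Hypothetical actuator: records the last commanded angle."""
    def __init__(self):
        self.angle = None

    def __call__(self, angle):
        self.angle = angle

# Wiring the pipeline is the whole "program" -- no explicit control flow.
sensor, mapper, servo = AnalogSensor(), MapRange(), ServoComponent()
sensor.out.connect(mapper)
mapper.out.connect(servo)

sensor.read(512)  # simulate one ADC sample flowing through the graph
```

The point of the sketch is that adding a second actuator is just another `connect` call, which is why the graphical wiring metaphor maps so directly onto the code.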

Keynote Forum

Zhou Xing

Director of Artificial Intelligence for Autonomous Driving, Borgward Automotive Group

Keynote: Predictions of short-term driving intention using recurrent neural network on sequential data

Time : 11:15-11:45

Biography:

Dr. Xing holds a PhD in particle physics, with a thesis focused on large-scale statistical data analysis, applying neural network methodologies to various analysis projects in the LHCb experiment at CERN. Thereafter, Dr. Xing joined the Stanford Linear Accelerator Center (SLAC) National Laboratory as an Engineering Physicist on the faculty and staff, working on data acquisition and analysis. He then joined NIO, a leading Chinese EV company, where he specialized in several deep-learning-driven fields related to autonomous driving, including supervised learning for semantic and road segmentation, moving-object detection, trajectory prediction using both optical camera and LIDAR measurements, reinforcement learning for continuous control, and policy-based semi-model-controlled reinforcement learning methods.

Abstract:

Predicting drivers' intentions and their behavior on the road is of great importance for the planning and decision-making processes of autonomous driving vehicles. In particular, relatively short-term driving intentions are the fundamental units that constitute more sophisticated driving goals and behaviors, such as overtaking a slow vehicle in front, or exiting or merging onto a highway. While human drivers can usually rationalize, in advance, various on-road behaviors and intentions, as well as the associated risk, aggressiveness and reciprocity characteristics, such reasoning skills can be challenging and difficult for an autonomous driving system to learn. In this article, we present a disciplined methodology to build and train a predictive driving system, comprising components such as traffic data, a traffic scene generator, a simulation and experimentation platform, a supervised learning framework for sequential data using a recurrent neural network (RNN) approach, and validation of the model using both quantitative and qualitative methods. The simulation environment is crucial for modeling driving intention, behavior and collision risk, since collecting a statistically significant amount of such data, and running experimentation processes, in the real world can be extremely time- and resource-consuming. In this environment we can parameterize and configure relatively challenging traffic scenes, customize different vehicle physics and controls for various types of vehicles such as cars, SUVs and trucks, test and use a high-definition map of the road model in algorithms, and generate sensor data from light detection and ranging (LIDAR) and optical-wavelength cameras for training deep neural networks.
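The sequential-learning component of such a system can be illustrated with a minimal Elman-style RNN forward pass: a hidden state is updated over a sequence of per-timestep feature vectors, and a softmax over the final state yields intention-class probabilities (e.g. "keep lane" vs. "change lane"). This is a generic sketch with assumed toy dimensions, weights and feature names — not the network, features or classes used in the talk; a practical system would use a trained LSTM or GRU on real kinematic and map data.

```python
import math

def rnn_step(x, h, Wxh, Whh, bh):
    # Elman cell: h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + bh)
    return [math.tanh(sum(Wxh[i][j] * x[j] for j in range(len(x)))
                      + sum(Whh[i][k] * h[k] for k in range(len(h)))
                      + bh[i])
            for i in range(len(h))]

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict_intention(seq, Wxh, Whh, bh, Why, by):
    # Run the cell over the whole sequence, then classify the final state.
    h = [0.0] * len(bh)
    for x in seq:
        h = rnn_step(x, h, Wxh, Whh, bh)
    logits = [sum(Why[i][k] * h[k] for k in range(len(h))) + by[i]
              for i in range(len(by))]
    return softmax(logits)

# Toy, untrained weights (illustrative only): 2 input features,
# 3 hidden units, 2 intention classes.
Wxh = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]
Whh = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]]
bh  = [0.0, 0.0, 0.0]
Why = [[0.3, -0.1, 0.2], [-0.2, 0.4, 0.1]]
by  = [0.0, 0.0]

# Hypothetical per-timestep features, e.g. (lateral offset, heading change).
seq = [[0.2, 0.7], [0.5, 0.1]]
probs = predict_intention(seq, Wxh, Whh, bh, Why, by)
```

With trained weights, `probs` would be read as the short-term intention distribution for the observed trajectory; supervision comes from labeled sequences, which is where the simulated traffic scenes described above become valuable.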