Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 4th World Congress on Robotics and Artificial Intelligence in Osaka, Japan.

Day 2:

Keynote Forum

Henrique Martins

Beira Interior University, Portugal

Keynote: eHealth in Portugal and Intelligent Machines

Time: 9:30-10:15

Biography:

Henrique Martins is a University Professor and Chairman of the Board of the Portuguese eHealth Agency, SPMS EPE. He completed his Master's and PhD studies in Management of Information Systems.

Abstract:

Hyunjoo J Lee
Biography:

Hyunjoo Jenny Lee has expertise in microsystems and a passion for advancing the biomedical field. She works on developing bio/neuro/medical microsystems that enhance current diagnostic and therapeutic technologies.

Abstract:

Statement of the Problem: In the current ageing society, the number of people affected by neurodegenerative brain diseases is rapidly increasing, yet there are no effective therapeutics for many of these brain disorders. To find an effective treatment, a brain probing system with multiple functions, such as light stimulation, multi-channel recording and drug delivery, is essential. Furthermore, such a system will eventually lead to effective brain-machine interfaces and neuroprosthetic systems. In addition to brain monitoring, non-invasive brain stimulation with high spatial resolution will be an important integral part of a brain-machine interface to enhance cognition and mobility in the future. Thus, modulating the brain through physical stimulation, such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), has been actively researched.

 

Methodology & Theoretical Orientation: Because of our insufficient understanding of the brain, we first developed brain recording and stimulation systems for in vivo small-animal experiments. We developed a multi-functional probe, based on silicon micromachining technology, that records brain signals from multiple locations while simultaneously stimulating the brain with drugs and light. In addition, for in vivo small-animal experiments, stimulation using a commercial ultrasound transducer is subject to several limitations because of its large size and heavy weight. Thus, we also developed a new lightweight ultrasound transducer based on Capacitive Micromachined Ultrasonic Transducer (CMUT) technology. The transducer array is designed in a ring shape in order to achieve a natural single focus without the need for beamforming.

 

Findings: The developed microsystems can successfully stimulate the brain invasively, using optogenetics and drug delivery, while recording individual neuronal spikes at multiple locations, as well as stimulate the brain non-invasively using ultrasound.

 

Conclusion & Significance: The presented neuromicrosystems will be an enabling tool that advances research in functional brain mapping, therapeutics for brain disorders and brain-machine interfaces.

  • Remote and Telerobotics
  • Robot Localization and Map Building
  • Mobile Robots
  • Humanoid Robots
  • Neural Networks
  • Marine Robotics
  • Aerial Robotics and UAV
  • Role of 3D Printing in Robotics
  • Intelligent Autonomous Systems and Robots
Location: 2
Speaker

Chair

Dr. Qinggang Meng

Loughborough University, UK

Speaker

Co-Chair

Dr. Andrew Weightman

University of Manchester, UK

Session Introduction

Wenbo Wang

Nanjing University of Aeronautics and Astronautics, China

Title: Miniature microdrive for locomotion control in freely moving lizard Gekko gecko
Speaker
Biography:

Wenbo Wang is an Associate Professor at Nanjing University of Aeronautics and Astronautics, China. His expertise is in the biomimetics of gecko locomotion, i.e., the modulation of the gecko's locomotion.

Abstract:

For neural stimulation and recording in neuroethology, different customized electrode microdrives are required for different unrestrained species. We designed and fabricated a novel electrode microdrive specifically for studying the locomotion control of a freely moving Gekko gecko lizard. Opening the skull of the lizard was required for implantation of the electrodes in the midbrain. The microdrive system consists mainly of a titanium case, to protect the skull opening and shield external signals, and a screw-and-nut mechanism to drive the electrode plate. The miniature system has a volume of 9.6 mm × 9.8 mm × 11.8 mm and a mass of 2.05 g, which suits the head morphology and loading capability of the lizard. The system was successfully applied to study the locomotion control of unrestrained Gekko gecko lizards, which exhibited diverse behaviors corresponding to various implantation depths of the electrodes and could be efficiently guided to a lateral orientation.

Zhouyi Wang

Nanjing University of Aeronautics and Astronautics, China

Title: The substrate reaction forces acting on a gecko’s limbs responding to inclines
Speaker
Biography:

Zhouyi Wang has completed his Degree of Doctor of Philosophy from Nanjing University of Aeronautics and Astronautics. His research areas include tribology, bionics, animal kinematics and dynamics.

Abstract:

Locomotion is an essential characteristic of animals, and excellent moving ability results from delicate sensing of the substrate reaction forces (SRF) acting on the body and modulation of behavior to meet the requirements of motion. The inclined substrates present in habitats pose a number of functional challenges to locomotion. To overcome these challenges effectively, climbing geckos execute complex and accurate movements that involve both front and hind limbs. Few studies have examined a gecko's SRFs on inclines steeper than 90°. To reveal how the SRFs acting on the front and hind limbs respond to changes in incline angle, we obtained detailed measurements of the three-dimensional SRFs acting on the individual limbs of the Tokay gecko while it climbed on inclines of 0°-180°. The fore-aft forces acting on the front and hind limbs show opposite trends on inverted inclines of greater than 120°, indicating that the propulsion mechanism changes in response to inclines. When the incline angle changes, the forces exerted in the normal and fore-aft directions by the gecko's front and hind limbs are reassigned to take full advantage of the limbs' different roles in overcoming resistance and in propelling locomotion. This also ensures that weight acts in the angle range between the forces generated by the front and hind limbs. The change in the distribution of SRF with incline angle is directly linked to a favorable trade-off between locomotive maneuverability and stability.
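The reassignment of load between the normal and fore-aft directions as the incline steepens can be illustrated with a simple rigid-body decomposition of body weight (a sketch with hypothetical values; the study measured the actual three-dimensional SRFs per limb):

```python
import math

def weight_components(mass_kg, incline_deg, g=9.81):
    """Decompose body weight into components normal to and along
    an inclined substrate (static, whole-body approximation)."""
    theta = math.radians(incline_deg)
    weight = mass_kg * g
    normal = weight * math.cos(theta)    # presses into (or pulls off) the substrate
    fore_aft = weight * math.sin(theta)  # acts down-slope along the substrate
    return normal, fore_aft

# Example: a hypothetical 50 g gecko on increasingly steep inclines.
for angle in (0, 60, 90, 120, 180):
    n, f = weight_components(0.05, angle)
    print(f"{angle:3d} deg: normal = {n:+.3f} N, fore-aft = {f:+.3f} N")
```

Past 90° the normal component changes sign: gravity now pulls the animal off the substrate, so adhesion must supply the normal force, consistent with the propulsion-mechanism change the abstract reports on inverted inclines.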

Speaker
Biography:

Manoj Kumar Sharma received his Bachelor of Technology in Electronics and Communication Engineering from UPTU, Lucknow, in June 2009. He began his academic career as a Lecturer in the Faculty of Electronics, Informatics & Computer Science Engineering at Shobhit University, Meerut, where he taught from 2009 to 2012. Pursuing his quest for knowledge and keen interest in advanced technologies, he was awarded an M.Tech in VLSI Engineering from Shobhit University, Meerut, in 2012. After completing his Master's, he was appointed Assistant Professor in the same faculty. In 2015 he began work as a researcher in the field of neuroscience, seeking to integrate it with communication engineering to provide novel solutions for predicting neurological diseases in advance and providing better treatment.

Abstract:

In today's world, about 1% of the population needs a system for physical support. These people are not able to stand alone and need support for every task. In this paper, we develop a system that improves the lifestyle of impaired people at minimum cost, in the form of a gesture-based media control system. In recently developed technologies, image processing techniques have generally been used for controlling such systems, but they are costly and complex. To make the system cost-effective, less complex and easy for humans to interact with, we describe the development of a gesture-controlled Windows Media Player that is operated wirelessly with the help of hand gestures. It consists of two main parts: a transmitter and a receiver. In our system, a gesture-based method is used to build an interface for human-machine interaction (HMI). The transmitter (a ZS-040 Bluetooth module) transmits a signal according to the position of the hand gesture, and the receiver (a laptop) receives the signal and controls Windows Media Player through Visual Basic.
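The receiver side of such a system can be sketched as a small lookup from received gesture codes to media-player commands (the byte codes and command names below are hypothetical; the original implementation drives Windows Media Player from Visual Basic):

```python
# Hypothetical one-byte codes sent by the transmitter for each hand position.
GESTURE_TO_COMMAND = {
    b"U": "volume_up",
    b"D": "volume_down",
    b"L": "previous_track",
    b"R": "next_track",
    b"P": "play_pause",
}

def handle_gesture(code: bytes) -> str:
    """Map a received gesture code to a media-player command,
    ignoring unknown or corrupted codes."""
    return GESTURE_TO_COMMAND.get(code, "ignore")

# Simulated stream of codes arriving over the Bluetooth serial link.
for code in (b"P", b"U", b"X"):
    print(code, "->", handle_gesture(code))
```

Silently ignoring unknown codes keeps the player stable when the wireless link drops or corrupts a byte.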

Speaker
Biography:

Rea Nkhumise is a Robotics Engineer in the SKA-SA Science Data Processing Department. He holds an MSc in Mechatronics Engineering from Tennessee Tech University (USA), with specialty and experience in computational intelligence, control algorithms and embedded system design. His background is in mechanical engineering. He has designed and built multiple automated products using open source hardware and software, especially Arduino-supported, which are now commercially available.

Abstract:

The SKA-SA is building a radio telescope that will output a total of 62 EB of raw data annually. Most of the data will be inactive and rarely accessed; however, it needs to be safely stored for 10-15 years. The data is not sensitive and there is no urgency when it has to be retrieved. The challenge is finding a cost-effective data storage architecture that has the capacity, longevity and reliability fit for purpose. Among the architectures available on the market, the tape library is the most affordable; yet its purchase costs (including installation and licensing) from leading manufacturers are enormous. There is also continual development of open source robotics technology (such as CNC machines, 3D printers, etc.) which is in principle similar to that of the tape library. The same technology can be harnessed and repurposed for the tape library industry to drive down costs tremendously. This disruption could potentially improve tape library technology and benefit small businesses and scientific organizations. In the project approach, the development of the tape library was sectioned into four main modules: (1) a storage assembly made from extruded beams and 3D-printed cartridge cells; (2) a robotic manipulator moving in two axes, using an Arduino and Grbl to control the actuators; (3) an end-effector that picks and grabs tapes during operation, controlled by an Arduino shield; and (4) support accessories for monitoring, reading tapes and coordinating the operational process using a Raspberry Pi. This approach cuts costs by 75%, and the storage capacity is nearly that of systems acquired from leading manufacturers. It also uses LTO industry standards and assumes competitive performance specifications such as scalability, compatibility and ease of assembly. This is a work-in-progress project in which the reliability and robustness of a tape library built with open source technology are still to be evaluated.
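Since Grbl accepts plain G-code over the Arduino's serial port, the two-axis manipulator motion of module (2) can be sketched as generated move commands (the cell pitch, feed rate and grid layout below are hypothetical, not the project's actual calibration):

```python
def goto_cell(column: int, row: int, cell_pitch_mm=110.0, feed_mm_min=2000):
    """Build the Grbl G-code line that positions the picker in front of a
    cartridge cell on a regular grid (hypothetical 110 mm pitch)."""
    x = column * cell_pitch_mm
    y = row * cell_pitch_mm
    return f"G1 X{x:.1f} Y{y:.1f} F{feed_mm_min}"

# In the real system these lines would be streamed to Grbl over the
# Arduino's serial port, one line at a time, waiting for "ok" replies.
program = [goto_cell(3, 2), goto_cell(0, 0)]
for line in program:
    print(line)
```

Keeping the motion layer as generated G-code is what lets commodity CNC/3D-printer firmware replace the proprietary robotics of a commercial tape library.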

Speaker
Biography:

Youssouf Ismail Cherifi is currently pursuing his PhD at IGEE (ex-Inelec). He has completed his Master’s degree in Computer Engineering and previously worked on the implementation and control of a biped robot using static walk and Arduino.

Abstract:

Speaker recognition is the task of identifying or verifying a speaker based on speaker-specific features such as MFCC, IMFCC, LPC, LPCC and so on. The problem is that each feature contains a certain level of information that no other set of features can provide. However, all this information can be split into two categories: features that model the speech production system and features that model the human way of hearing. The objective of this paper is to use two sets of features and deep learning in order to enhance the accuracy of the speaker recognition task. The two sets of features are selected such that they represent both the speech production and hearing systems. This system can also be used to extract deep features that can replace the classical speaker-specific features.
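Combining a production-side feature set (e.g. LPC) with a perception-side set (e.g. MFCC) can be sketched as normalizing each set separately and concatenating them into one input vector for the deep network (a minimal NumPy sketch; the dimensions are hypothetical, not those of the paper's system):

```python
import numpy as np

def fuse_features(production_feats, perception_feats):
    """Z-normalize each feature set on its own scale, then concatenate
    them into a single input vector for the downstream deep network."""
    fused = []
    for feats in (production_feats, perception_feats):
        f = np.asarray(feats, dtype=float)
        f = (f - f.mean()) / (f.std() + 1e-8)  # per-set normalization
        fused.append(f)
    return np.concatenate(fused)

# Hypothetical 12-dim LPC-like and 13-dim MFCC-like vectors for one frame.
rng = np.random.default_rng(0)
x = fuse_features(rng.normal(size=12), rng.normal(size=13))
print(x.shape)  # one 25-dim fused input per frame
```

Normalizing each set before concatenation keeps one feature family from dominating the other purely because of its numeric range.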

Speaker
Biography:

Antonius Mamphiswana has expertise in Mathematics and Statistics and a passion for improving computer vision. His approach to terrain classification combines machine learning with an approach introduced by Sebastian Thrun et al., which can improve terrain classification for unmanned ground vehicles.

Abstract:

Identifying the traversability of terrain is of great importance for an unmanned ground vehicle (UGV), because UGVs are intended to operate in dynamic environments containing many obstacles (both dynamic and static). For the UGV to identify the traversability of the scene, it needs to continuously sense and predict the traversability of the terrain around it. For sensing, we used five color cameras and a Velodyne lidar. To predict whether the terrain is traversable, we used a deep convolutional neural network (DCNN) together with a height difference calculated from the point cloud data. The DCNN classifies the RGB images, and the height-difference approach classifies the point cloud data from the Velodyne lidar. The height-difference approach was used by Stanley, the car that won the 2005 DARPA Grand Challenge. The results from both the DCNN classifier and the height-difference approach are timestamped, which helps in fusing them. Both approaches produce good results in identifying the traversability of the terrain.
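The height-difference test on the lidar data can be sketched as gridding the point cloud and thresholding the per-cell elevation spread (a minimal sketch; the 0.5 m cell size and 0.15 m threshold are hypothetical, not the paper's tuned values):

```python
def traversability_grid(points, cell_size=0.5, max_height_diff=0.15):
    """Classify each occupied grid cell as traversable (True) when the
    max-min height spread of its points stays below the threshold."""
    cells = {}  # (col, row) -> (min z, max z)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        lo, hi = cells.get(key, (z, z))
        cells[key] = (min(lo, z), max(hi, z))
    return {key: (hi - lo) <= max_height_diff for key, (lo, hi) in cells.items()}

# Flat ground near the origin; a 0.4 m obstacle one cell over.
cloud = [(0.1, 0.1, 0.00), (0.2, 0.3, 0.02),   # flat cell
         (0.9, 0.1, 0.00), (0.8, 0.2, 0.40)]   # obstacle cell
print(traversability_grid(cloud))
```

Thresholding the spread rather than the absolute height makes the test insensitive to gentle slopes while still rejecting steps and obstacles.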

Speaker
Biography:

Shannon D Bohle is the President of Archivopedia LLC, USA. In 2011, she was awarded 2nd Place for Curiosity AI in the FVC, an International Department of Defense competition in artificial intelligence advertised by The White House. She has earned her Bachelor of Arts in History and English at Miami University, a Master of Library and Information Science (MLIS) at Kent State University and was granted a Certificate of Diligent Study in History and Philosophy of Science for postgraduate study at the University of Cambridge.

Abstract:

This paper is based on talks that included a demo of my new virtual humanoid embodied AI bot capable of autonomously expressing appropriate emotions using gestures, facial expressions and text-to-voice. It does this while engaging in natural language conversations or giving automated scripted lectures with a slide show presentation. The system employs a touch-interaction-based learning and communication system in which the virtual bot responds to and learns from touch-sense feedback training: a poke (negative reinforcement) or a swipe (positive reinforcement), conveyed through a touch screen. The bot's predecessor was part of an award-winning project in an international artificial intelligence competition advertised by The White House and sponsored by the U.S. Department of Defense. The talk-and-demo combination consists of a creatively scripted talk given by the AI bot that provides a counterargument to its critics (RAND and a variety of organizations aimed at slowing its progress). Using logos, ethos and pathos, it argues for its legal and ethical rights to development and suggests specific technical guidance for developing AI along with the accompanying responsibilities. Embedded in the talk are theoretical foundations for an AI Hierarchy of Needs drawing on landmark studies. These include the work of Maslow; advances in understanding the philosophical, psychological and neurological bases of consciousness and language; the Turing Test; Asimov's Three Laws of Robotics; Chalmers' hard problem of consciousness; Robert Plutchik's psychoevolutionary theory of emotion; Paul Ekman's relationships between nonverbal communication (such as facial expressions) and emotions; classical and operant conditioning for learning (especially Pavlov and Skinner); and the roles of biology and social cognitive neuroscience (for sympathy and empathy capacity).
Additionally, the presentation touches on the development of new machine learning techniques based on affective experiences (learned through touch and conversation) for the improvement of human-computer interaction, with potential use in a variety of AI-enabled robots.
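The poke/swipe training signal described above amounts to operant conditioning, which can be sketched as a simple update on the bot's response-preference weights (a toy sketch with a hypothetical learning rate and representation, not the system's actual method):

```python
def reinforce(weights, response, feedback, lr=0.1):
    """Raise or lower the preference weight of the response just shown:
    swipe = positive reinforcement, poke = negative reinforcement."""
    delta = lr if feedback == "swipe" else -lr
    weights[response] = max(0.0, weights.get(response, 0.5) + delta)
    return weights

def choose(weights):
    """Pick the currently highest-weighted response."""
    return max(weights, key=weights.get)

w = {"smile": 0.5, "frown": 0.5}
w = reinforce(w, "frown", "poke")    # user pokes after a frown
w = reinforce(w, "smile", "swipe")   # user swipes after a smile
print(choose(w))  # the bot now prefers to smile
```

Clamping weights at zero keeps repeated pokes from driving a preference negative, so a long-punished response can still recover later.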