Project 1: Mixed-Reality Massively Multi-Player Urban Simulation

Project description:

Cities are essentially complex systems of interconnected sub-systems: infrastructure, transportation, housing, education, healthcare, energy and so on. Understanding and evaluating the dynamics of such a system, with so many equally complex sub-systems interacting with each other, is extremely difficult.

This research aims to build a simulation-based approach to the above problem using state-of-the-art technologies in open/big data analytics, cloud computing, mixed reality (Virtual Reality/Augmented Reality) and gaming. The simulation will model a large number of entities (servers, people, buildings, vehicles etc.) as well as their massive interactions. Data feeds collected from open data portals and from other smart city deployments will provide crucial inputs for the simulation dynamics. Mixed reality combined with massively multi-player online role-playing gaming technologies will provide a transformative experience of human interaction with the simulation infrastructure. The resulting data-driven strategic decision-making platform will be applicable not only for deriving insights on sustainable urbanization, but also for a wide range of other complex decision-making environments, from local and global business strategy for firms to international geo-political policy analysis for nations.

At the current phase, we are working on an augmented reality simulation of Manhattan to understand the dynamics of the urban environment. It uses Unity3D to visualize the entities, Improbable’s SpatialOS for cloud-based simulation support, and the Microsoft HoloLens to allow interactivity in augmented reality. We’ve constructed a preliminary Citi Bike sharing modeling system with bike stations and trip routes based on real-world data, incorporating a 3D model of Manhattan. We’ve also explored related functionalities of the HoloLens for the simulation.

Open tasks for the next step:

  • Scale the Manhattan 3D model in Unity
  • Scale the bike-sharing data processing within the system
  • Enhance the models of the stations, routes and other entities
  • Integrate the HoloLens with the platform
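As a minimal sketch of what the bike-sharing data processing might look like, the snippet below aggregates departures per station. The CSV schema and station names are simplified for illustration; real Citi Bike trip-history files carry more fields.

```python
import csv
import io
from collections import Counter

# Hypothetical, simplified trip records in the style of Citi Bike's
# public trip-history CSVs (the real files have more columns).
SAMPLE_CSV = """start_station,end_station,duration_sec
W 21 St & 6 Ave,Broadway & E 14 St,540
Broadway & E 14 St,W 21 St & 6 Ave,610
W 21 St & 6 Ave,E 17 St & Broadway,300
"""

def station_departures(csv_text):
    """Count trips departing from each station."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["start_station"] for row in reader)

counts = station_departures(SAMPLE_CSV)
busiest, n = counts.most_common(1)[0]
```

Per-station counts like these could then drive the size or activity level of station entities in the Unity scene.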

Background references:

  1. Disrupting cities through technology
  2. Vast Internet failure simulator created with UK Government
  3. Microsoft Hololens

Candidate requirements:

This is a very hands-on research project. The candidate must be extremely self-motivated, capable of independent study, and comfortable exploring the latest technologies in a host of inter-disciplinary areas.

Knowledge or experience in one or more of the following areas is required: 3D Modeling (Unity3D, SketchUp, Blender), AR/VR (Microsoft HoloLens/Windows 10, Facebook Oculus, HTC VIVE), Geographic Information Systems, Building Information Modeling, computer networks, open/big data analytics, Android/iOS/Windows 10/Linux, Java, C#, JavaScript etc.

How to apply:

Interested candidates may send a resume/cover letter to Dr. Charles Shen; shortlisted candidates will be contacted for an interview.

Project 2: Virtual Reality Tele-presence Robot System

Project description:

The goal of this project is to build an affordable and intelligent robot with virtual reality tele-presence capability based on consumer hardware. In an earlier phase of this project we created a basic autonomous robot navigation system that is capable of understanding its position in the world and discovering new places. The robot system is also able to perform 3D scanning and produce environment models with Google Tango. In addition, we have tested a basic virtual reality teleconferencing prototype using Google Cardboard, which allows a user to remotely control a camera system that provides an immersive stereoscopic view in real time.

VR tele-presence device

This device delivers a real-time, all-direction 3D view of any remote location directly to a VR device such as Google Cardboard. The setup consists of a remote 3-axis motor rig with a stereo camera connected to a remote laptop; on the other side is the VR device. There are two important communication channels. The first, from the remote laptop to the VR device, streams the stereo video that the user views in the VR device. The second, from the VR device to the remote laptop, sends real-time head-tracking data to control the remote motors and provide the all-direction view.

Choice of platform/framework: Python, VLC, OpenCV, and other open source tools that have cross-platform support (Windows/Mac/Linux)
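The second communication channel above can be sketched in Python with only the standard library. The wire format, field order and gearing constant below are assumptions for illustration, not the project's actual protocol:

```python
import struct

# Hypothetical wire format for the VR-device -> remote-laptop channel:
# three little-endian 32-bit floats (yaw, pitch, roll in degrees).
HEAD_POSE_FMT = "<3f"

def encode_head_pose(yaw, pitch, roll):
    """Pack one head-tracking sample for transmission to the remote laptop."""
    return struct.pack(HEAD_POSE_FMT, yaw, pitch, roll)

def decode_head_pose(packet):
    """Unpack a sample on the remote side before driving the motors."""
    return struct.unpack(HEAD_POSE_FMT, packet)

def pose_to_motor_steps(yaw, pitch, roll, steps_per_degree=10):
    """Map angles to step counts for a 3-axis motor rig (assumed gearing)."""
    return tuple(round(a * steps_per_degree) for a in (yaw, pitch, roll))

packet = encode_head_pose(45.0, -10.0, 0.0)
yaw, pitch, roll = decode_head_pose(packet)
steps = pose_to_motor_steps(yaw, pitch, roll)
```

A fixed-size binary packet like this keeps the head-tracking channel low-latency; the payload could be sent over UDP so stale samples are simply dropped.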

Open tasks:

  1. Motor control in Python on the remote laptop
  2. Video stitching in Python (with OpenCV or other libraries) on the remote laptop
  3. Video streaming in Python using VLC from the remote laptop
  4. Webcam calibration for 3D perception on the VR device
  5. Performance tests and optimizations to maintain quality live video streaming
  6. Replacement of the remote laptop with a single-board computer such as a Raspberry Pi or Intel Edison (i.e. a portable setup)

Autonomous Robot Navigation

We have a prototype consisting of an iRobot Create 2 with 3D sensors and a netbook controller. The robot runs ROS Indigo packages which enable it to autonomously navigate physical spaces.

Open tasks:

  1. Implement a convenient user interface to command the robot to move to various points
  2. Integrate servo system with the robot
  3. Improve the accuracy of the navigation
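Task 1 above can be sketched independently of ROS as a small command parser that resolves a named destination to map coordinates, which a navigation stack would then receive as a goal. The place names and coordinates below are hypothetical:

```python
# Hypothetical named waypoints on the robot's map, in metres.
WAYPOINTS = {
    "lab": (0.0, 0.0),
    "door": (3.5, 1.2),
    "charger": (-1.0, 4.0),
}

def parse_command(command, waypoints=WAYPOINTS):
    """Resolve 'go <place>' to an (x, y) goal, or raise on unknown input."""
    parts = command.strip().lower().split()
    if len(parts) != 2 or parts[0] != "go":
        raise ValueError(f"expected 'go <place>', got: {command!r}")
    name = parts[1]
    if name not in waypoints:
        raise ValueError(f"unknown place: {name}")
    return waypoints[name]

goal = parse_command("go door")
```

In a ROS-based version, the resolved (x, y) pair would be wrapped in a goal message and handed to the navigation stack rather than returned directly.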

Background references:

  1. Google Tango
  2. Robot Operating System
  3. iRobot Create Programmable Robot

Candidate requirements:

This is a very hands-on research project. The candidate must be extremely self-motivated, capable of independent study, and comfortable exploring the latest technologies in a host of inter-disciplinary areas.

Knowledge or experience in one or more of the following areas is required: Robot Operating System (ROS), Google Tango, Kinect, iRobot Create, Unity3D, Google Cardboard and other AR/VR kits, Android/iOS/Windows 10, C#, Java, JavaScript, Python, networking etc.

How to apply:

Interested candidates may send a resume/cover letter to Dr. Charles Shen; shortlisted candidates will be contacted for an interview.

Project 3: Understanding the impact of open data on social well-being

Project description:

The Social Progress Index (SPI) is a multi-dimensional measure of human wellbeing. Its three main pillars are Basic Human Needs, Foundations of Wellbeing and Opportunity. Each pillar is further divided into four components, and the components themselves comprise a diverse range of indicators. The Open Data Barometer (ODB), on the other hand, is a measure of open government data, a global movement to make government “open by default” – promising to make public-sector data openly available, without charge and in re-usable formats. The objectives of opening government data are to secure government accountability, to coordinate action to improve society, and to bootstrap new business ideas that can benefit from access to government data. The ODB sub-indexes are readiness, implementation and impact. Keeping in view the goals of ODB, such as accountability and social, political and economic impact, this research intends to contrast the sub-indexes of ODB with the sub-dimensions of SPI.

This research will focus on the implementation and impact sub-indexes of the ODB. Using the sub-dimensions of the SPI, it will attempt to assess the impact of open data on basic human needs, health, environment, education, information access and corruption.
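The intended contrast can be sketched as a rank correlation between an ODB sub-index and an SPI dimension across countries. The country scores below are invented for illustration, not taken from the real datasets:

```python
# Made-up per-country scores for one ODB sub-index and one SPI dimension.
odb_impact = {"A": 80.0, "B": 55.0, "C": 30.0, "D": 70.0}
spi_basic_needs = {"A": 90.0, "B": 60.0, "C": 45.0, "D": 50.0}

def ranks(values):
    """Rank values from 1 (smallest), assuming no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation, tie-free case: 1 - 6*sum(d^2)/(n(n^2-1))."""
    n = len(xs)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

countries = sorted(odb_impact)
rho = spearman([odb_impact[c] for c in countries],
               [spi_basic_needs[c] for c in countries])
```

In practice a statistical package would handle ties and significance testing; this only illustrates the shape of the comparison the research proposes.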

Background references:

  1. Open Data Barometer
  2. Social Progress Index

Candidate requirements:

The candidate must be extremely self-motivated, capable of doing independent study.

The candidate needs to have knowledge of Open Data and good command of statistical tools as well as interpretation of data and results.

How to apply:

The openings for this project have been filled.