Look up people around you on social networks using facial recognition.
Mood-based news, music, and video recommendations using brainwave (EEG) processing.
Mind-controlled games and switches.
Real-time search for objects around you, with information shown on a head-up display.
Use a ring, gesture control, and pupil tracking as inputs.
Record and stream video to the web and to virtual reality (VR) applications.
An open platform and API for developers to build software and connect other devices.
An online community for users to interact and collaborate on projects.
Running a Linux operating system (Ubuntu, OpenWRT, etc.) on the microprocessor; developing open-source native Linux applications, kernel modules, and device drivers.
On-board WiFi enables easy, fast web browsing; Bluetooth Low Energy allows pairing other devices, including external motion or touch sensors, with the Pepper board.
Dry EEG sensors integrated into the headset collect brainwaves, drive machine-learning-based brain-controlled applications, and stream data to a cloud database. More sensors can be plugged in as needed via the active on-sensor amplifier extension.
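Before raw EEG samples can feed a classifier or a cloud stream, they are typically grouped into fixed-size windows. The sketch below shows one way to do that; the class name, window length, and hop size are illustrative assumptions, not the headset's actual API.

```python
from collections import deque

class EEGWindower:
    """Collect raw EEG samples into fixed-size overlapping windows.

    Hypothetical helper: window length and hop are placeholder values,
    since the real headset's sample format is not specified here.
    """
    def __init__(self, window=256, hop=128):
        self.window, self.hop = window, hop
        self.buffer = deque()

    def push(self, sample):
        """Add one sample; return a full window when one is ready, else None."""
        self.buffer.append(sample)
        if len(self.buffer) == self.window:
            out = list(self.buffer)
            for _ in range(self.hop):
                self.buffer.popleft()  # slide forward by `hop` samples
            return out
        return None

# Tiny windows so the overlap is easy to see.
w = EEGWindower(window=4, hop=2)
windows = [out for s in range(8) if (out := w.push(s)) is not None]
print(windows)  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7]]
```

Overlapping windows let downstream classifiers update more often than once per full window, at the cost of redundant computation.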
Using LCD displays and reflective glass surfaces to create a "hologram" display on the glass in front of your eye, composed of different views for the left and right eyes.
Using infrared light sensors, accelerometers, machine learning, and computer-vision techniques to sense, track, and map gestures to commands, and to detect other moving objects.
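The simplest accelerometer-based gesture cue is a spike in acceleration magnitude above the resting ~1 g of gravity. This is a minimal sketch of that idea; the function name and threshold are assumptions for illustration, not the shipped gesture mapper.

```python
import numpy as np

def detect_gesture_events(accel, threshold=1.5):
    """Return indices where acceleration magnitude (in g) exceeds `threshold`.

    `accel` is an (N, 3) array of x/y/z accelerometer readings; the 1.5 g
    threshold is an assumed placeholder a real system would calibrate.
    """
    magnitude = np.linalg.norm(accel, axis=1)
    return np.flatnonzero(magnitude > threshold)

# Resting readings (~1 g from gravity) with one simulated flick at index 2.
readings = np.array([[0, 0, 1.0], [0, 0, 1.02], [1.8, 0.3, 1.1], [0, 0, 0.99]])
print(detect_gesture_events(readings))  # [2]
```

A production gesture recognizer would classify the shape of the motion around each detected spike rather than the spike alone.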
Using inward- and outward-facing cameras to record video and provide input for facial, pupil, and object detection and command mapping.
Using EEG, accelerometer, browser, and other data to build a Linux-based machine learning software module for recommendations and system management based on users' preferences and habits.
Using computer-vision techniques to do real-time object and facial recognition, pulling up profile information and object wikis based on the user's environment and needs.
Brainwave-based mood recognition: using supervised learning to track whether a user likes or dislikes something, and to track levels of focus and meditation. Classifier outputs drive real-time computer control.
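A common heuristic for focus versus meditation is the ratio of beta-band to alpha-band power in the EEG spectrum. The sketch below assumes that heuristic for illustration; the band edges and the ratio are conventional choices, not necessarily what the product's trained classifiers use.

```python
import numpy as np

def focus_index(samples, fs):
    """Crude focus score: beta-band (13-30 Hz) power over alpha-band (8-13 Hz).

    Higher values suggest concentration, lower values relaxation. Band
    boundaries are standard EEG conventions, assumed here for illustration.
    """
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    power = np.abs(np.fft.rfft(samples)) ** 2
    alpha = power[(freqs >= 8) & (freqs < 13)].sum()
    beta = power[(freqs >= 13) & (freqs < 30)].sum()
    return beta / alpha

# Synthetic one-second signals at 256 Hz.
fs = 256
t = np.arange(fs) / fs
relaxed = np.sin(2 * np.pi * 10 * t)                 # alpha-dominated
focused = relaxed + 3 * np.sin(2 * np.pi * 20 * t)   # strong beta component
print(focus_index(focused, fs) > focus_index(relaxed, fs))  # True
```

A supervised classifier would take band-power features like these, labeled by the user's reported state, rather than relying on the fixed ratio alone.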
Supporting video streaming and display in virtual reality applications using web-based VR.
Building a website where users can register, set up profiles, and interact online. Users can do online training and stream EEG data while listening to music, watching videos, and playing in-browser games with an EEG headset on.
Software interfaces for connecting to and streaming data from other devices and hardware, including Emotiv, OpenBCI, NeuroSky, Muse, Arduino, iPhone, and Android phones.
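Supporting many headsets usually means hiding each vendor's protocol behind one shared streaming interface. The sketch below shows that shape; the class and method names are hypothetical, not the project's actual API.

```python
from abc import ABC, abstractmethod

class HeadsetConnector(ABC):
    """One interface every device connector implements (hypothetical names).

    Concrete subclasses would wrap the Emotiv, OpenBCI, NeuroSky, or Muse
    SDKs; only the shared surface is sketched here.
    """
    @abstractmethod
    def connect(self) -> None: ...

    @abstractmethod
    def read_samples(self, n: int) -> list:
        """Return the next `n` EEG samples as floats."""

class FakeConnector(HeadsetConnector):
    """Stand-in device that yields zeros, useful for offline testing."""
    def connect(self) -> None:
        self.connected = True

    def read_samples(self, n: int) -> list:
        return [0.0] * n

dev = FakeConnector()
dev.connect()
print(len(dev.read_samples(5)))  # 5
```

With this shape, the rest of the pipeline (windowing, classification, cloud streaming) never needs to know which headset is attached.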
A group of hackers and hobbyists from the Cognitive Technology Group at Berkeley started by learning the technical side of brain-computer interfaces and developing data preprocessing and feature selection.
Started building mood classifiers and concentration/meditation calibration. Developed machine learning training models, including Support Vector Machines, K-Nearest Neighbors, Artificial Neural Networks, and Recurrent Neural Networks. Built cloud connectors to stream EEG data to the cloud, and wrote connectors for more EEG headsets, including the Emotiv EPOC, OpenBCI, and NeuroSky.
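Of the models listed above, K-Nearest Neighbors is simple enough to sketch in a few lines. This is a minimal illustrative version; the actual training pipeline, feature set, and labels are not specified in this document.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    A minimal K-Nearest Neighbors sketch; features and labels below are
    invented toy data, not recorded EEG.
    """
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy band-power features: (alpha, beta) pairs labeled 0 = "relaxed", 1 = "focused".
x = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
y = np.array([0, 0, 1, 1])
print(knn_predict(x, y, np.array([0.15, 0.85])))  # 1
```

KNN needs no training step, which makes it a convenient baseline before moving to SVMs or neural networks.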
Collaborated with an ECE research lab at the University of Illinois at Urbana-Champaign to build dry, capacitive EEG sensors that do not need conductive gel. Designed and built an amplifier circuit with an on-sensor gain of 100 to amplify the signal and increase the signal-to-noise ratio.
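The benefit of amplifying at the sensor is that interference picked up afterwards, e.g. on the cable, is not amplified along with the signal. The numbers below (a 10 µV EEG signal, 5 µV of cable interference) are assumed purely to illustrate the arithmetic; only the gain of 100 comes from the text.

```python
import math

# A voltage gain of 100 corresponds to 40 dB.
gain = 100
gain_db = 20 * math.log10(gain)
print(gain_db)  # 40.0

# Assumed illustrative numbers: 10 uV signal, 5 uV noise added on the cable.
signal_uV, cable_noise_uV = 10.0, 5.0
snr_without = signal_uV / cable_noise_uV            # amplify after the cable
snr_with = (signal_uV * gain) / cable_noise_uV      # amplify at the sensor
print(snr_with / snr_without)  # 100.0
```

In other words, pre-amplifying by the gain factor improves the SNR against downstream interference by that same factor.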
Building brain-controlled VR games using the Muse headset, Unity 3D, and the Oculus Rift; exhibited at the San Francisco Exploratorium from January 29 to March 1. Built a WebVR application using WebGL and three.js, and wrote pupil and facial recognition code. Developed with more hardware, including embedded Linux, FPGA and microcontroller boards, ARM, and ultrasound and infrared light sensors.