The user interface team began with a blank slate, and our initial brainstorming was all over the map. The first interface we designed was very simple, using only the keyboard and arrow keys; it was not intuitive and had limited potential. Next we transitioned to an iPad interface, and we finally settled on an interface using only gesture control and facial recognition. This was ideal because everything could be done without touching a button, and the only additional equipment required was a Leap Motion controller.

Every feature could be controlled through this simple interface. If the operator pointed a finger at the screen, the Leap Motion tracked that vector and aimed the camera accordingly. Depending on how many fingers were extended, other features such as the laser or the robotic arm could be engaged as well. Facial recognition allowed LEDs mounted on the robot to display the operator's mood: the operator's eyebrow height governed the robot's LED color. In case the user did not have a Leap Motion or a webcam, every feature could also be controlled with a mouse-pad joystick and the keyboard.
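The two mappings described above can be sketched in a few lines. This is a minimal illustration, not the team's actual implementation: the finger-count-to-feature table and the eyebrow-to-color scheme are assumptions, and real code would read the hand and face measurements from the Leap Motion SDK and a webcam-based face tracker rather than taking them as plain arguments.

```python
def select_feature(extended_fingers: int) -> str:
    """Map the number of extended fingers to a robot feature.

    The specific table below (1 = camera, 2 = laser, 3 = arm) is an
    assumed example mapping, not the team's documented one.
    """
    features = {1: "camera", 2: "laser", 3: "arm"}
    return features.get(extended_fingers, "idle")


def led_color(eyebrow_height: float) -> tuple:
    """Map a normalized eyebrow height (0.0 to 1.0) to an RGB LED color.

    Assumed color scheme for illustration: lowered brows fade toward
    red, raised brows toward green.
    """
    h = max(0.0, min(1.0, eyebrow_height))  # clamp noisy tracker input
    return (int(255 * (1 - h)), int(255 * h), 0)


if __name__ == "__main__":
    print(select_feature(2))   # which feature two extended fingers engage
    print(led_color(1.0))      # LED color for fully raised eyebrows
```

In a live loop, `extended_fingers` would come from the Leap Motion frame data and `eyebrow_height` from facial-landmark tracking on each webcam frame, with the result sent to the robot's LED driver.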