  • Telepresence is “the use of virtual reality technology to operate machinery by remote control or to create the effect of being at a different or imaginary location.” The goal of the “Telepresence Robot” studio was to design and build a robot which a human user can remotely inhabit and control. More specifically, the intention of this studio was to enable a teacher to instruct students from around the world.

    The telepresence robot, "telebot" for short, has a wooden frame with translucent plastic sheets bolted onto it. It also has internal LEDs so that it can change color and display the different moods of the operator. Using translucent plastic allows the entire robot to glow without using many LEDs.

    In order for an instructor to effectively teach a student, they have to be able to inspect the projects the students are working on. On the telepresence robot, the arm is designed to hold just one thing: a camera that aims downwards. Normally, if there is something on a table, like a drawing or a project, a human would lean over to see it. Since the telepresence robot cannot lean, the arm extends over the object to give the operator a wider viewing range. The arm also extends and retracts to see more or less, and can fold into the robot. In all, the arm lets people using the robot see objects or tables that they could not normally see, helping them be more telepresent.

    The telepresence robot also has a head that raises and lowers. This enables the operator to become "taller" or "shorter" and interact with people of different heights. The head has a 5-inch screen that displays the operator's face, as well as a camera that lets the operator see forward; this camera acts as the "eyes" of the robot. Telebot also has a wide-angle camera that gives the operator a full 360-degree view of the surroundings and helps avoid obstacles. The robotic arm is attached to the head and rises and lowers with it, letting the user look at a tabletop from closer or farther away.

    To communicate with all of its different parts, the robot uses a series of Arduino microcontrollers attached to a laptop. The laptop runs a visual programming language called Max/MSP and communicates with the computer that the operator uses.
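
A minimal sketch of that idea, assuming a hypothetical "DEVICE value" line protocol (the actual studio code used Max/MSP messages, which aren't shown here): the laptop encodes one command per line and ships it to an Arduino over serial.

```python
def build_command(device: str, value: int) -> bytes:
    """Encode one command line for an Arduino listening on a serial port.

    The 'DEVICE value' protocol is hypothetical -- the real robot spoke
    Max/MSP messages -- but the structure is the same: each subsystem
    gets a name and a numeric argument.
    """
    allowed = {"DRIVE_L", "DRIVE_R", "ARM", "NECK", "LED"}
    if device not in allowed:
        raise ValueError(f"unknown device: {device}")
    return f"{device} {value}\n".encode("ascii")


# e.g. build_command("ARM", 90) -> b"ARM 90\n", ready for serial.write()
```

In practice the bytes would be written to the board with a library such as pySerial; the device names above are placeholders, not the project's actual identifiers.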

    The robot drives using a three-wheeled system. The front two wheels are powered and the back wheel is a caster, allowing a full range of motion. The wheels are large enough that the robot can easily drive over wires. In addition, this wheel configuration makes the robot's movement simple and intuitive to control.
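
The two-powered-wheels-plus-caster layout is a classic differential drive. A small sketch of the kinematics, with an assumed track width (the real robot's dimensions aren't given):

```python
def wheel_speeds(v: float, omega: float, track: float = 0.4):
    """Left/right wheel speeds (m/s) for a differential-drive base.

    v     -- desired forward speed in m/s
    omega -- desired turn rate in rad/s (positive = counterclockwise)
    track -- distance between the two powered wheels; 0.4 m is an
             assumed value, not measured from the robot
    """
    left = v - omega * track / 2
    right = v + omega * track / 2
    return left, right
```

Driving both wheels at the same speed moves the robot straight, while opposite speeds spin it in place, which is what makes this configuration so easy to control.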

    The operator controls the robot using a device called a Leap Motion. This device tracks the operator's fingers, and the software translates their motions into commands that the robot runs. These commands move the robot, change the LED colors, move the arm, and so on.
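
A sketch of how such a gesture-to-command mapping might look; the specific finger counts and command names are illustrative, since the write-up only says that different gestures trigger different features:

```python
# Hypothetical mapping from tracked finger count to a robot command.
FINGER_COMMANDS = {
    1: "point_camera",  # one finger: aim the camera along the pointed vector
    2: "laser_on",      # two fingers: fire the laser pointer
    3: "move_arm",      # three fingers: extend or retract the arm
    5: "drive",         # open hand: drive the base
}

def command_for_fingers(count: int) -> str:
    # Unrecognized gestures fall through to doing nothing.
    return FINGER_COMMANDS.get(count, "idle")
```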

    The operator also controls the robot through facial recognition, which makes the robot turn different colors when the operator makes different faces. For example, if the operator looks angry, the robot turns red.

    Using this robot allows a person to interact with and act in an environment as though they were there. It is also significantly cheaper than any other telepresence robot on the market. This design has potential and could be turned into a real product.

  • The user interface team began with a blank slate, and our initial brainstorming was all over the map. The first interface we designed was very simple, using only the keyboard and the arrow keys. This was not intuitive and had limited potential. Next we transitioned to an iPad interface, and finally decided on an interface using only gesture control and facial recognition. This was ideal because everything could be done without touching a button, and the only additional equipment required was a Leap Motion. It was possible to control every feature with this simple interface. If one pointed a finger at the screen, the Leap Motion would track the vector and point the camera. Depending on how many fingers were pointed, different features such as the laser or the robotic arm could be used as well. Facial recognition allowed LEDs mounted in the robot to display the operator's mood: the operator's eyebrow height governs the robot's LED color. In case the user did not have a Leap Motion or a webcam, every feature could also be controlled with a trackpad joystick and the keyboard.
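
The eyebrow-to-color rule could be sketched like this; the thresholds and the particular colors are assumptions (the text only confirms that eyebrow height drives the LED color, and that an angry face maps to red):

```python
def mood_color(eyebrow_height: float) -> tuple:
    """Map a normalized eyebrow height (0 = fully furrowed, 1 = fully
    raised) to an RGB color. Thresholds and colors are illustrative."""
    if eyebrow_height < 0.3:
        return (255, 0, 0)    # furrowed brow: angry -> red
    if eyebrow_height > 0.7:
        return (255, 255, 0)  # raised brow: surprised -> yellow
    return (0, 0, 255)        # neutral -> blue
```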

  • The arm of the robot was designed to hold a camera, but originally its aims were quite different. In the beginning, we thought of a multitude of ideas, ranging from an arm to a bubble blower to a projector, some more feasible (not to mention sillier) than others.

    Eventually, two ideas emerged, ready to be made: a laser pointer, to act as a finger pointing at things, and an arm, to hold things. The laser pointer was done very quickly, so work started on the arm by drawing and modeling how it would work. Eventually, a design resembling a human arm was chosen, with one motor at the "shoulder", one motor at the "elbow", and two flat "bones" to swivel.

    This didn't work: with our design, the weight of a motor at the elbow would break the arm. The solution was to make a linkage: two wires that ran from the servo to the arm, giving us the same torque while placing the weight at the more stable shoulder. Here there was another problem: the wires would run into each other. After a bit more thought, the wires were offset vertically so they would not hit each other. This design was also scrapped because the wires would eventually bend to the point where they were touching again. Two things were needed: thicker wire and a different system.
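
The reason for moving the motor back to the shoulder is a simple torque argument: a mass at the elbow loads the shoulder joint by its weight times its distance from the pivot. A worked sketch with assumed masses and lengths (none of these numbers come from the project):

```python
G = 9.81  # gravitational acceleration, m/s^2

def shoulder_torque(arm_mass, arm_length, motor_mass=0.0, motor_pos=0.0):
    """Torque (N*m) the shoulder must resist with the arm horizontal,
    modeling the arm as a uniform bar plus an optional point-mass motor
    at distance motor_pos from the shoulder."""
    torque = arm_mass * G * arm_length / 2   # bar's weight acts at its center
    torque += motor_mass * G * motor_pos     # motor treated as a point load
    return torque
```

With an assumed 0.1 kg, 0.6 m arm, carrying an assumed 0.2 kg motor at a 0.3 m elbow loads the shoulder with about 0.88 N·m; moving that motor to the shoulder drops the load to about 0.29 N·m, which is the saving the linkage buys.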

    If the arm couldn't handle the weight of a motor, it definitely couldn't handle the weight of picking things up, so instead the arm would hold a camera facing downwards. This would let the user of the robot look at objects on top of a table and get a closer view of things the robot itself couldn't reach.

    To cut weight, new pieces were laser cut with holes in them. The two interfering wires were replaced with one, saving weight with almost no loss of strength, and the single wire connected to a motor to control the elbow joint from afar. This design worked, but still wasn't strong enough to hold the camera.

    To solve this, a plate was made. The plate goes around the motor and is attached to a metal bearing to help carry more weight, allowing the shoulder to swivel under load. A few decorative plastic panels were added, and then the arm was done.

  • Designing the drivetrain was quite an interesting process. We had a lot of trouble in the beginning because we had to decide which wheels to use, how many to use, and what the shape of the base would be. We originally decided to make the base a square with four omni wheels. Since we were looking for a shape that would give the robot's design the most surface area per side of the base, we decided a square would not be the best choice.

    We then decided to use either a pyramid- or triangle-shaped base with three wheels. In order to strafe using the omni wheels, we would have needed more than 120 W of power. Since the robot would not have that big a battery, and sideways movement was not essential, we opted not to go with the omni wheels. Designing the base was quite difficult, considering we didn't have a good way to mount the wheels to it.

    In our next prototype, we laser cut a pyramid-shaped base out of plywood. We used this shape because it gave more room in the base for wires, computers, servos, and so on. Eventually, we found that it would be easier to construct the body of the robot if the base was a house-shaped pentagon.

    Since we wanted the robot to be maneuverable, we decided to use three wheels. The two in the front were motorized and the one in the back was a caster. Since the front wheels needed to be strong enough to move the entire robot, we used the kind of motors that roll up car windows. For the back wheel, we used laser-cut circles made of plywood.

  • Our team was in charge of designing a way for the robot to display emotions, either through lights or movement. It soon became clear that LEDs would be the perfect way to do this, by assigning a color to each specific emotion, such as celebratory or angry.

    The next challenge was how to attach the LEDs. We had the idea of placing them inside the robot rather than leaving them outside and exposed to view. We then designed and 3-D printed a holder for the LEDs. The final product turned out perfectly.

    After we finished designing how the LEDs would be held, we worked on making them actually show emotions. We went through a lot of trial and error with the code, but came up with some basic light changes and configurations. After that, we started making them fade and blink, which was a lot harder. Then we made the LEDs change via the serial monitor; once that worked, we had about seven emotions that could be triggered by entering numbers in the serial monitor.

    Next, we collaborated with the user interface team and turned the emotions into buttons you could click on the screen. The final product was seven buttons that changed the robot's emotion. Anger would fade the colors between red and orange; celebratory would fade the LEDs between red and green, similar to Christmas colors.
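
Fades like these can be produced by linear interpolation between the two colors, sweeping a blend factor back and forth. A minimal sketch (the exact RGB values chosen for "orange" and so on are assumptions):

```python
def fade(c1, c2, t):
    """Linearly interpolate between two RGB colors; t runs from 0 (pure
    c1) to 1 (pure c2). Sweeping t up and down over time produces the
    fade, e.g. between red (255, 0, 0) and orange (255, 165, 0)."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))
```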

  • The purpose of this project was to make a device that raises and lowers the neck with a motor and an Arduino. A lot of thinking and planning went into this piece.

    The first task was to find out which parts would be stable enough to raise and lower a heavy piece smoothly. It was decided that a drawer slide would be the best option because it is metal and has bearings in it.

    Originally, a pulley system seemed like an easy way to raise and lower the neck. But after weighing the outcomes, one of the cons was that the pulley would easily become tangled and stop functioning.

    The next best option was a screw-and-bolt mechanism: as the rod spins, the bolt moves up or down, causing the neck to rise or fall. There had to be some way to attach this mechanism to the robot, so a fixture was built from two wooden pieces that held everything together. These pieces were designed in Rhino, precisely laser cut, and then screwed to the drawer slide.
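
The travel of such a screw mechanism follows directly from the thread pitch: each full revolution of the rod moves the bolt by one pitch. A quick sketch with an assumed pitch (the project's actual rod size isn't given):

```python
def neck_travel(revolutions: float, pitch_mm: float = 1.25) -> float:
    """Vertical travel of the bolt, in mm: one revolution of the threaded
    rod advances it by one thread pitch. The 1.25 mm default is the pitch
    of a standard M8 rod, assumed here for illustration."""
    return revolutions * pitch_mm
```

This also suggests why the neck moves slowly: at an assumed 1.25 mm per turn, raising the head 100 mm takes 80 full revolutions of the motor.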

    Once this was assembled, the motor had to be attached to the wooden plates. The original idea was to use zip ties, but they ended up stretching more than expected. Metal doesn't stretch, so metal ties were used to hold the motor in place.

    The last thing that needed to be done was to attach the bolt to the drawer slide. A plastic piece was designed in Rhino that could not only hold the bolt but also easily attach to the drawer slide. The 3D printer made this piece very precisely, and because of that it was very easy to attach.

    Over the course of the studio, this project went from a small thought on paper to a working device. It is now in the robot and is controlled by an Arduino. Although the neck raises and lowers slowly, it functions and is rather difficult to break.

  • As the overall design team progressed, I took on responsibility for the body design, prototyping, fabrication, and overall coordination of parts.

     

    The design team provided me with an initial sketch of the overall look of the robot. I began the design process by creating a 3-D model in Rhino based on the sketch. I then laser cut a scaled-down flat version of the model out of paper and assembled it into a 1/4-scale model to help the other groups visualize the look of the robot.

     

    My next step was to make a new 3-D model of the robot that I could laser cut and assemble into a structure. To do this, I had to account for the thickness of the plywood and the angle at which the wood would rest. After I designed the new model, we thought of a new design in which there would be a wooden frame for each of the sides and we would screw HDPE (high-density polyethylene) over the frame. This way we could have LEDs on the inside of the robot.


    Next I worked on designing a bracket with two uses: holding the frame of the robot together, and holding up a shelf in the middle of the robot for the electronics. I created two iterations of the bracket, but due to time constraints I had to come up with a quick solution for holding the frame together: I laser cut a piece of plastic for each edge and attached one end to one panel and the other end to the adjacent panel.

  • Today we did 90% of the software. We got the robot to move with the Leap Motion, but the motors ran very slowly. The only pieces of software we did not get working yet were the servos: we still have to get the servos for the laser and the servos for the camera running. We also made a mount for the laser servo, because it has to rise above the camera to point at anything. We also edited the Max software, which was set up to drive servos rather than motors. By the end of the day we had it up and working.

     

    Tomorrow will be a very, very hard-working morning. We only have until 12:45 to complete everything: all of the rest of the robot. Although we do not have a lot to do, we still have to touch everything up and make sure that our mini robot is perfect for our presentation. I hope we can finish it in time.

  • WE HAVE A FREAKING ROBOT ARM THAT CAN ATTACH TO STUFF
    Ahem.

    We started working on mounting the arm. We 3D printed a little servo mount (we had to print it three times!) and added a swivel to it. We also attached it to a prototype robot top.

    Another thing we started doing was designing the top of the robot. It better be done by tomorrow, and then we'll have a chance to present our awesome work.

  • What is Telepresence?

    Telepresence uses technology to allow a person to feel as if they were present, to give the appearance of being present, or to have an effect, via telerobotics, at a place other than their true location. Generally, telepresence is at least partially bi-directional, with both the operator and the people on the other end able to interact in a fairly natural manner.

    Telepresence essentially transmits sensory information (sound, video, etc.) to allow the operator to interact with a situation at some distance from them as if they were there. Think of it like transportation or teleportation, without all the messy bits of actually needing to leave your desk.

    Virtual Reality can be a special case of telepresence where multiple people shift their senses to a shared space that does not actually exist.

    Telepresence in the Future

    Generally, as of now, we have been able to shift our eyesight and hearing anywhere on Earth (and beyond!). What's next?

    With new sensor technologies like the Kinect and the Leap Motion, we have new, more natural ways of getting our physical body's movement into the computer. Once digitized, that information can be sent over the internet worldwide and used to control robots, which can send data back to us.

    Slowly our ability to transport ourselves becomes more and more immersive.

    What do you think some of the next possibilities are that will open up with these new technologies? What are we still missing?

     
