AI: NuVie Town

  • Artificial intelligence (AI), deep learning, and neural networks are omnipresent in today’s industries and products, including transportation, medical diagnosis, search engines, shopping and marketing, autonomous vehicles, social media, remote sensing, and many more. They offer machine learning-based techniques for solving large-scale problems that are out of reach of human capability because of their complexity to both model and solve. We’ll take these concepts and apply them to NuVie Town, a scaled-down futuristic mini-city, where we will explore autonomous vehicles, algorithms for creating efficient transportation routes, and how to design single-robot and multi-robot behaviors for interacting with one another and the environment.

    In this studio, we’ll begin by discussing what it means to have machine intelligence. We’ll go deep into how artificial neural networks use statistical models (loosely modeled on biological neural networks) to represent relationships, and then use learning algorithms and optimization techniques to learn from observed data and improve their models toward an optimal solution. We’ll also learn about the power of deep learning to consume and process raw input data in order to compute a target output (a minimal sketch of this kind of learning appears after the skills list below). Get ready to dive into deeper learning, and apply it hands-on in NuVie Town!

    Focus Skills/Subjects/Technologies:

       Machine Learning (AI, Deep Learning, Neural Networks)

       Physics (Electricity, Magnetism)

       Engineering

       Programming

       Electronics

       Robotics (Arduino, Sensors, Actuators)
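
    As a rough illustration of what "learning from observed data" means in practice, the sketch below fits a single artificial neuron to a handful of made-up observations using plain gradient descent. This is a minimal example written for this description, not code from the studio; the data, learning rate, and number of steps are arbitrary choices.

```python
import numpy as np

# Toy observations: inputs x and target outputs y (made-up data).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # underlying relationship: y = 2x + 1

# A single "neuron": weight w and bias b, improved step by step.
w, b = 0.0, 0.0
learning_rate = 0.05

for step in range(2000):
    prediction = w * x + b
    error = prediction - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # converges toward w = 2, b = 1
```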

  • The Duckiebot is a vehicle that can navigate the streets of Duckietown either under remote control or under the control of a Raspberry Pi. The ability to control itself allows the Duckiebot to be mostly self-sufficient, needing to stop to charge only every 12-18 hours. The Duckiebot uses its indicator lights to communicate its next action in a way that can be understood by human drivers. The Duckiebot is powered by two individual motors mounted parallel to each other, allowing it to turn by driving the wheels at different speeds; a ball caster provides stability by giving the Duckiebot a third point of contact with the ground (a rough sketch of this differential-drive control appears below). The Duckiebot was made to teach NuVu students how self-driving cars work and how to build and use a Raspberry Pi, in preparation for more complex future projects.
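
    As a rough sketch of how two parallel motors plus a caster become a steerable robot, the snippet below uses the gpiozero library's Robot helper on a Raspberry Pi. The GPIO pin numbers are placeholders and the motor-driver wiring is an assumption for illustration; the actual Duckiebot's wiring and control code may differ.

```python
from time import sleep
from gpiozero import Robot  # differential-drive helper from the gpiozero library

# Pin numbers are placeholders; they depend on how the motor driver is wired.
bot = Robot(left=(4, 14), right=(17, 18))

bot.forward(speed=0.5)                    # both wheels forward -> drive straight
sleep(2)
bot.left(speed=0.4)                       # wheels spin opposite ways -> turn in place
sleep(0.5)
bot.forward(speed=0.5, curve_right=0.3)   # unequal wheel speeds -> gentle right curve
sleep(2)
bot.stop()
```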

  • Modeled after MIT's Duckietown, NuVie Town provides a miniature environment in which small autonomous vehicles can use computer vision to follow roads and obey traffic signals. A ceiling-mounted camera provides local positioning, much as GPS gives a vehicle its current location and surroundings.

    This project required learning how to code in Python (and a bit in C++) and how to use the OpenCV library, along with a few other smaller libraries, to make the robot run. Additionally, telling a robot exactly what to do is very different from directing a person, since humans can interpret and deduce what is needed while robots cannot. The coolest part, however, is that this project is a predecessor of technologies that are emerging now and will be a prevalent part of human life in the near future.

    The process for this project was fairly straightforward, and it felt like progress was made every day. That does not mean there were no challenges: for example, the multiprocessing code kept using all of the computer's memory and crashing the program, which was eventually solved with a different method of memory allocation. In the end, the final result was a stable program that could navigate the streets of Duckietown (a minimal sketch of the kind of OpenCV line detection involved appears below).
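
    As a minimal sketch of the kind of OpenCV processing involved, the snippet below reads frames from a camera (or a pre-recorded video), masks the yellow center line in HSV space, and turns the line's centroid into a steering offset. The HSV thresholds, the region of interest, and the steering logic are illustrative assumptions rather than the studio's actual code.

```python
import cv2
import numpy as np

# Open the default camera; pass a video file path instead of 0 to test on a recording.
capture = cv2.VideoCapture(0)

# Rough HSV range for the yellow center line; real thresholds depend on lighting.
YELLOW_LOW = np.array([20, 100, 100])
YELLOW_HIGH = np.array([35, 255, 255])

while True:
    ok, frame = capture.read()
    if not ok:
        break

    # Look only at the lower half of the image, where the road is.
    height, width = frame.shape[:2]
    roi = frame[height // 2:, :]

    # Convert to HSV and keep only yellow pixels.
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, YELLOW_LOW, YELLOW_HIGH)

    # The centroid of the yellow pixels gives a rough line position.
    moments = cv2.moments(mask)
    if moments["m00"] > 0:
        line_x = moments["m10"] / moments["m00"]
        offset = line_x - width / 2  # positive -> line is right of center, steer right
        print(f"steering offset: {offset:.1f} px")

    cv2.imshow("yellow mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```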

  • Duckietown is a system of miniature, artificially intelligent self-driving cars. The Local Positioning System is a navigation system that uses an overhead view from a webcam and unique visual identifiers (AR tags) on the Duckiebots to locate them.

    The webcam reads the AR tags and computes their locations using ArUco, an open-source tag-tracking library. It then feeds the positions to a server, where the bots can retrieve their own locations. Learning computer vision and AI was challenging, and compiling open-source software together is much harder than it appears: every stack has to build and has to not interfere with the other elements.

    At the beginning of the studio, we looked at the original Duckietown, which attempted to navigate by reading signs and street lines. I realized that it would be beneficial to assist this navigation and recognition with some larger sense of location, something like GPS. With that sense of position, a bot could know the local speed limit, know where stop signs and intersections are, and actually navigate from point A to point B. Throughout the studio, I worked primarily on implementing software to identify the tags. I looked at multiple different open-source packages and stacks for doing this; it was challenging to create a functional stack, and getting one to build became the main focus. I knew it wasn't feasible to implement all of the other navigation information associated with location in the two weeks, so I focused on tag recognition and transmitting the location to the Duckiebot. Many of the stacks I tried failed to build. I finally settled on a virtual machine running Ubuntu with OpenCV and the ArUco library. To transmit the data, I used the ROS (Robot Operating System) communication suite to create a topic and publish the locations to it (a rough sketch of this tag-detection and publishing pipeline appears below).
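
    The sketch below is a simplified, hypothetical version of that pipeline: detect ArUco tags in an overhead webcam frame with OpenCV, then publish each tag's pixel position on a ROS topic. The topic name, message type, and ArUco dictionary are assumptions made for illustration (and the cv2.aruco calls use the pre-4.7 OpenCV API).

```python
import cv2
import rospy
from geometry_msgs.msg import Point

rospy.init_node("local_positioning_system")
# Hypothetical topic name; each bot would subscribe and pick out its own tag ID.
publisher = rospy.Publisher("/lps/tag_positions", Point, queue_size=10)

# The dictionary must match the printed tags; DICT_4X4_50 is an assumption.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

capture = cv2.VideoCapture(0)  # overhead webcam
rate = rospy.Rate(10)          # publish at 10 Hz

while not rospy.is_shutdown():
    ok, frame = capture.read()
    if not ok:
        continue

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)

    if ids is not None:
        for tag_corners, tag_id in zip(corners, ids.flatten()):
            center = tag_corners[0].mean(axis=0)  # tag center in pixel coordinates
            # Use the z field to carry the tag ID in this simplified sketch.
            publisher.publish(Point(x=float(center[0]), y=float(center[1]), z=float(tag_id)))

    rate.sleep()

capture.release()
```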

  • This is a lane detection test on a pre-recorded video. The blue line shows where the Duckiebot thinks the white line is, the green line shows where it thinks the yellow line is, and the red circle shows the intersection point between the two lines (a hypothetical sketch of drawing this kind of overlay appears below).
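
    Purely as an illustration of how such an overlay might be drawn, the snippet below takes two fitted lines in slope-intercept (pixel) form, draws them, and marks their intersection with OpenCV. The colors match the description above, but the function and its inputs are hypothetical.

```python
import cv2

def draw_lane_overlay(frame, white_line, yellow_line):
    """Draw two fitted lane lines and their intersection point on a frame.

    Each line is a (slope, intercept) pair in pixel coordinates; these inputs
    are placeholders for whatever the detection step actually produces.
    """
    height, width = frame.shape[:2]

    def endpoints(slope, intercept):
        # Evaluate the line at the left and right edges of the frame.
        return (0, int(intercept)), (width, int(slope * width + intercept))

    cv2.line(frame, *endpoints(*white_line), (255, 0, 0), 2)   # blue: estimated white line
    cv2.line(frame, *endpoints(*yellow_line), (0, 255, 0), 2)  # green: estimated yellow line

    # Intersection of y = m1*x + b1 and y = m2*x + b2.
    m1, b1 = white_line
    m2, b2 = yellow_line
    if m1 != m2:
        x = (b2 - b1) / (m1 - m2)
        y = m1 * x + b1
        cv2.circle(frame, (int(x), int(y)), 6, (0, 0, 255), -1)  # red: intersection point
    return frame
```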

  • THE PRESENTATION POST

    This post's privacy is set to Everyone. This post showcases your final design by telling the comprehensive story of how your idea was born, developed, and manifested. The arc of the story should encompass the How of your project in a compelling narrative. It showcases your design process, including your brainstorming, each of your iterations, and your final prototype. It allows the viewer to delve deeply into your process.

    • Every Slide should have a Title and Caption.
    • The body of this post is The Brief. You should include a version of the Brief for each collaborator in the project.
    • This post will be used in your review presentation at the end of the session.

    You are encouraged to make your narrative as compelling as possible. All of the content below should be included, but if you would like to rearrange the material in order to tell your story differently, work with your coach.


    INTRODUCTION PORTION

    Your presentation is a narrative, and the introduction sets up the scene for that story. Here you introduce the project, say why it is important, and summarize what you did.

    TITLE WITH TAGLINE: This slide shows a crisp, clear final image and the title of your project, with a pithy blurb describing the project. The image, name, and tagline should draw a viewer in.

    Examples:

    • The Fruit - A line following, light tracking robot
    • Segmented Vehicle - A vehicle that conforms to the landscape
    • Cocoon - Wearable sculpture exploring the concept of transformation and death

    EVOCATIVE IMAGE: This is a single, clear image that evokes the soul of your project. This image helps set up the why in a compelling way, sets the stage for your narrative, and will help frame the entire presentation. The caption of this slide (set with the Edit Captions button when editing your post) should discuss the context of your project. No Text on the slide.

    THESIS STATEMENT: This is a TEXT ONLY slide that briefly describes the Soul and Body of your project. You can use the project description from your Brief or write something new. This statement ties together your narrative.

    Examples:

    • The Cocoon: A wearable sculpture that explores the concept of transformation and death. The Cocoon explores the spiritual journey beyond the human experience: what it means to be human, how wonder affects us, and what happens after death.
    • Body Accordion: A musical prosthetic that translates the wearer’s body movements into a dynamic multimedia performance. The Body Accordion converts flex sensor input to sound through Arduino, MaxMSP, and Ableton Live. 
    • Seed to Soup Animation: A whimsical animation about the slow food movement. Seed to Soup showcases a holistic method of cooking. From garden, to kitchen, to dinner table.
    • Antlers: A wearable sculpture inspired by antlers found in the deer and antelope family. "Antlers" explores the comparison between armor and attraction. 

    PROCESS PORTION

    The Process Portion of your presentation tells the story of how you iteratively developed your project. Somewhere in that story you should include conceptual and technical precedents that guided you at each stage as well as brainstorming and process sketches and clear photo booth imagery for 3-4 stages of your process.

    This portion is made up of three types of slides repeated 3-4 times. Each iteration in your process should include:

    • PRECEDENTS:  Precedents are any projects that inspired you creatively or gave you technical guidance. These can include conceptual precedents and technical precedents. No Text.
    • SKETCHES/SKETCH CONCEPT DIAGRAMS: These slides show your generative ideas in sketch form. These should be clean, clear drawings, and each sketch should show a clear idea. Do not simply scan a messy sketchbook page and expect that people will understand it. If you do not have a clear concept or working sketches, it is fine to make them after the fact. No Text.
    • PROTOTYPE IMAGES:  These are actual images of the prototypes  you documented in your daily posts. These images illustrate your design decisions and how your project changed at each step. No Text.

    FINAL PORTION

    The Final stage of your presentation is the resolution of your narrative and shows your completed work. The use diagram shows how your project works and the construction diagram shows how it is assembled. Final photos show the project both in action and at rest. The imagery captures your final built design.

    USE DIAGRAM: A diagram showing some aspect of the functionality. These can include:

    • How one uses or interacts with the project
    • The overall behavior of the project over time
    • For a complex interactive project, this can be a clear diagram of the software behavior

    MECHANICAL DIAGRAM:  A diagram offering insight on how the project is put together and functions technically.

    • Ideally, this will be an exploded axonometric
    • At minimum, this can be a labeled, disassembled photo

    ELECTRONICS or OTHER DIAGRAM: Additional diagrams showing some important aspect of your design. 

    IMAGERY: The last slides should have images of the final project. These images should be taken in the photo booth, cropped, and adjusted for contrast, brightness, etc. Images should include:

    • An image of the project in use (taken in the booth or at large). This should include a human interacting with the project.
    • Images of project alone. Include at least one overall image and one detail image.
    • You can also use an image In-Use. 
    • Consider using a GIF to show how the project works. 

     


  • Daria's Brief: The Duckiebot is a self-driving robot that navigates Duckietown, a miniature city. It was inspired by self-driving car technology, including lane, light, and color detection. The Duckiebot is made up of a simple plastic chassis, a Pi camera, and a Raspberry Pi computer that processes all the programs that control the bot. To control the robot, the Pi camera records a live video stream, which is then processed by the Raspberry Pi according to pre-written Python programs. To detect the traffic lights, it crops the image and adjusts the image's color values to find the contours of a traffic light; it then uses color recognition to see what color the light is. To detect lanes, the computer uses color recognition to find the center dashed yellow lines, right white lines, and red stop lines. It then uses Canny Edge Detection and the Hough Transform to find the edges of these lines. After combining all of the endpoints of the lines on each side of the lane marker, it finds the best-fit lines for both sets of points and averages them to determine the center of the lane marker, which the robot then follows.

    The NuVie Town studio teaches students a lot about coding and computer vision. Students come away knowing how to write complex color detection, lane detection, and other algorithms that remain accessible to people without a strong background in computer science, and that are powerful and rewarding even at a basic level.

    Sina's Brief:

    The Duckiebot is a small, car-like robot that can drive autonomously to navigate a miniature town. It is inspired by self-driving cars and the computer vision behind them, including image, light, and color detection. An onboard Raspberry Pi controls the Duckiebot's two wheels and filters a camera feed from the front of the robot to detect key features of the road. To detect a feature, the feed is cropped to the region of the image where the feature is located. Then it's converted to HSV (hue, saturation, and value), a different way of representing colors that makes it easier to detect lights and contrast. The third set of filters varies with the feature being detected, but in each case they look for contrasting parts of the image in a specific color. For streets, that might be a yellow line that contrasts against the black street; for lights, it could be a bright green light against the background. Once the feature is detected, the robot decides what to do with the information (for example, turn to follow the line or stop at the red light) and then powers the motors that turn or stop the corresponding wheels.

    The Duckiebot provides students with a basic model of a self-driving vehicle without the expense. Operating the robot requires knowledge of computer vision: how to filter images to detect key features, how to decide what to do with the information gathered, and how to program navigation. The Duckiebot shows that computer vision and self-driving technology are easily accessible, and it makes the artificial intelligence behind them seem less like a mystery and more like a puzzle whose pieces just need to be put in the right place (a rough sketch of the filtering-and-line-fitting pipeline described in both briefs follows below).
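
    As a rough illustration of the pipeline both briefs describe (crop, convert to HSV, mask a color, find edges with Canny, then fit line segments with the Hough transform and average them), here is a minimal sketch. The HSV thresholds, crop region, Hough parameters, and file name are assumptions made for illustration, not the Duckiebot's actual code.

```python
import cv2
import numpy as np

def detect_lane_line(frame, hsv_low, hsv_high):
    """Return an averaged (slope, intercept) for one colored lane marker, or None."""
    height, width = frame.shape[:2]

    # 1. Crop to the lower half of the image, where the road is.
    roi = frame[height // 2:, :]

    # 2. Convert to HSV and keep only the requested color.
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)

    # 3. Canny edge detection on the color mask.
    edges = cv2.Canny(mask, 50, 150)

    # 4. Probabilistic Hough transform to find line segments.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                               minLineLength=20, maxLineGap=10)
    if segments is None:
        return None

    # 5. Average the segments' endpoints into one best-fit (slope, intercept).
    slopes, intercepts = [], []
    for x1, y1, x2, y2 in segments[:, 0]:
        if x2 == x1:
            continue  # skip vertical segments to avoid dividing by zero
        slope = (y2 - y1) / (x2 - x1)
        slopes.append(slope)
        intercepts.append(y1 - slope * x1)
    if not slopes:
        return None
    return float(np.mean(slopes)), float(np.mean(intercepts))

# Example HSV ranges (illustrative; real values depend on camera and lighting).
YELLOW = (np.array([20, 100, 100]), np.array([35, 255, 255]))
WHITE = (np.array([0, 0, 200]), np.array([180, 40, 255]))

frame = cv2.imread("road_frame.jpg")  # placeholder input image
if frame is not None:
    print("yellow line:", detect_lane_line(frame, *YELLOW))
    print("white line:", detect_lane_line(frame, *WHITE))
```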
