The Duckiebot is a vehicle that can navigate the streets of Duckietown either under remote control or autonomously under the control of a Raspberry Pi. This autonomy makes the Duckiebot largely self-sufficient: it needs to stop and charge only every 12-18 hours. The Duckiebot uses its indicator lights to signal its next action in a way that human drivers can understand. It is driven by two individual motors mounted parallel to each other, which allows the Duckiebot to turn; a ball caster provides stability by giving the robot a third point of contact with the ground. The Duckiebot was made to teach NuVu students how self-driving cars work and how to build and use a Raspberry Pi, in preparation for more complex future projects.
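The two parallel motors form what is known as a differential drive: the robot steers by spinning one wheel faster than the other. A minimal sketch of the idea (the mixing function and value ranges below are illustrative assumptions, not the Duckiebot's actual motor code):

```python
def differential_drive(speed, turn):
    """Mix a forward speed and a turn rate into left/right wheel speeds.

    speed: forward command in [-1, 1]
    turn:  turn command in [-1, 1]; positive turns right
    Returns (left, right) wheel speeds, each clamped to [-1, 1].
    """
    left = speed + turn
    right = speed - turn
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left), clamp(right)

# Driving straight: both wheels get the same speed.
print(differential_drive(0.5, 0.0))   # (0.5, 0.5)
# Turning right: the left wheel spins faster than the right.
print(differential_drive(1.0, 0.5))   # (1.0, 0.5)
# Spinning in place: the wheels turn in opposite directions.
print(differential_drive(0.0, 1.0))   # (1.0, -1.0)
```

The ball caster carries no motor; it only keeps the chassis level while the two driven wheels set both speed and heading.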
Modeled after MIT's Duckietown, NuVu Town provides a miniature environment in which small autonomous vehicles use computer vision to follow roads and obey traffic signals. A ceiling-mounted camera acts as a local positioning system, giving each robot its current position much as GPS would.
This project requires learning to code in Python (and a bit of C++) and to use the OpenCV library, along with a few smaller libraries, to make the robot run. Additionally, telling a robot exactly what to do is very different from directing a person: humans can interpret instructions and deduce what is needed, while robots cannot. The coolest part, however, is that this project is a predecessor of technologies that are emerging now and will be a prevalent part of human life in the near future.
The process for this project was fairly straightforward, and it felt like progress was made every day. That does not mean there were no challenges, though. For example, the multiprocessing code kept using all of the computer's memory and crashing the program, but even this was eventually solved with a different method of memory allocation. In the end, the final result was a stable program that could navigate the streets of Duckietown.
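The writeup doesn't say exactly what the "different method of memory allocation" was, but a common fix for multiprocessing exhausting memory is to allocate one shared buffer up front and have processes read and write it in place instead of copying every camera frame. A minimal single-process sketch of that idea using Python's standard `multiprocessing.shared_memory` (names and sizes are hypothetical):

```python
from multiprocessing import shared_memory

FRAME_BYTES = 640 * 480 * 3  # one RGB frame at the Pi camera's resolution

# Producer side: allocate a single reusable block instead of copying
# each new frame into every worker process.
shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES)
shm.buf[:4] = bytes([10, 20, 30, 40])  # pretend these are pixel bytes

# Consumer side: attach to the same block by name; no copy is made.
view = shared_memory.SharedMemory(name=shm.name)
pixels = bytes(view.buf[:4])
print(pixels)

# Clean up: close both views, then unlink the block exactly once.
view.close()
shm.close()
shm.unlink()
```

Because both sides map the same physical memory, the per-frame cost stays constant no matter how many worker processes attach.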
Duckietown is a system of miniature artificially intelligent self-driving cars. The Local Positioning System is a navigation system that uses an overhead view from a webcam and unique visual identifiers (AR tags) on the Duckiebots to locate them.
The webcam reads the AR tags and computes their locations using ArUco, an open-source tag-tracking library. It then feeds the positions to a server where the bots can retrieve their own locations. Learning computer vision and AI was challenging, and compiling open-source software together is much harder than it appears: every component of the stack has to build and has to not interfere with the other elements.
At the beginning of the studio, we looked at the original Duckietown, whose robots attempted to navigate by reading signs and street lines. I realized it would be beneficial to assist this navigation and recognition with some larger sense of location, something like GPS. With that sense of position, a robot could know the local speed limit, know where stop signs and intersections are, and actually navigate from point A to point B. Throughout the studio, I worked primarily on implementing software to identify the tags. I looked at multiple open-source packages and stacks for implementing them; creating a functional stack proved challenging and became the main focus. I knew it wasn't feasible to implement all of the other location-based navigation information in the two weeks, so I focused on tag recognition and transmitting each location to the Duckiebot. Many of the stacks I tried failed to build. I finally settled on a virtual machine running Ubuntu with OpenCV and the ArUco library installed. To transmit the data, I used ROS, the Robot Operating System's communication suite, to create a topic and publish the locations to it.
This is a lane-detection test on a pre-recorded video. The blue line shows where the Duckiebot thinks the white line is, the green line shows where it thinks the yellow line is, and the red circle marks the intersection point of the two lines.
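Finding that red circle is simple linear algebra: intersect the two fitted lines. A sketch, assuming each line is given as a pair of endpoints as produced by the fitting step (the function name and test coordinates are hypothetical):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersect line p1-p2 with line p3-p4 (each point an (x, y) tuple).

    Returns the (x, y) intersection, or None if the lines are parallel.
    Treats the segments as infinite lines, which is what we want when
    projecting lane markings toward their vanishing point.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel or coincident lines
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (x, y)

# A white line rising to the right and a yellow line rising to the left
# cross in the middle of a 640x480 frame.
print(line_intersection((0, 480), (640, 0), (640, 480), (0, 0)))
# (320.0, 240.0)
```

Steering toward this intersection point keeps the robot centered between the two lane markings.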
Daria's Brief: The Duckiebot is a self-driving robot that navigates Duckietown, a miniature city. It was inspired by self-driving car technology, including lane, light, and color detection. The Duckiebot is made up of a simple plastic chassis, a Pi camera, and a Raspberry Pi computer that runs all the programs controlling the bot. To control the robot, the Pi camera records a live video stream, which the Raspberry Pi processes according to pre-written Python programs. To detect traffic lights, it crops the image and adjusts the image's color values to find the contours of a traffic light; it then uses color recognition to see what color the light is. To detect lanes, the computer uses color recognition to find the center dashed yellow lines, the right white lines, and the red stop lines. It then uses Canny edge detection and the Hough transform to find the edges of these lines. After combining the endpoints of the lines on each side of a lane marker, it finds the best-fit lines for both sets of points and averages them to determine the center of the lane marker, which the robot then follows.
The NuVu Town studio teaches students a lot about coding and computer vision. Students come away knowing how to write color-detection, lane-detection, and other algorithms that remain accessible to people without a strong background in computer science. These algorithms are still powerful and rewarding, even at a basic level.
The Duckiebot is a small, car-like robot that can drive autonomously through a miniature town. It is inspired by self-driving cars and the computer vision behind them, including image, light, and color detection. An onboard Raspberry Pi controls the Duckiebot's two wheels and filters a camera feed from the front of the robot to detect key features of the road. To detect a feature, the feed is first cropped to the region of the image where that feature is expected. The crop is then converted to HSV (hue, saturation, and value), a different way of representing colors that makes it easier to detect lights and contrast. The final set of filters varies with the feature being detected, but in each case it looks for contrasting parts of the image in a specific color. For streets, that might be a yellow line that contrasts against the black road; for lights, a bright green light against the background. Once a feature is detected, the robot decides what to do with the information, for example turning to follow the line or stopping at a red light, and then powers the motors that turn or stop the corresponding wheels.
The Duckiebot provides students with a basic model of a self-driving vehicle without the expense. Operating the robot requires knowledge of computer vision: how to filter images to detect key features, how to decide what to do with the information gathered, and how to program navigation. The Duckiebot shows that computer vision and self-driving technology are easily accessible, and it makes the artificial intelligence behind them seem less like a mystery and more like a puzzle whose pieces just need to be put in the right place.