Final: Prototype Testing

On Tuesday, March 27th we tested the prototype solutions with the young children from the Eliot Pearson School.

Robotics Students: please provide some documentation of your prototype solution below.  This does NOT have to be extensive.  Please include the following:

  • Name of your project (and add your teammates as authors of the post)
  • Brief (couple sentence) description of your current technical solution
  • Pictures (and if you have a video clip of it "in action" then great) of what you developed
  • Your code
  • And brief (couple sentence) description of major updates/changes you'll be making for the final version (to be tested Tuesday, April 10th)


Prototype Testing

Enigma: Spaco's Quest

The goal of this project was to create a game that challenges children's spatial awareness and helps them develop a sense of left, right, and forward. Our technical solution is a flat board divided into a grid to form a maze, plus a small LEGO robot that moves from square to square based on directions given to it remotely. Those directions would come from buttons on a second EV3 acting as a controller. For the prototype testing we succeeded in having the maze-runner robot follow a pre-set list of directions and solve a maze; however, we did not get device-to-device communication working in time. Instead of the second EV3 controller, the kids laid out printed arrows to plan their sequence of directions.

Physical Components:

The components used during testing were a LEGO EV3 maze-runner robot, a maze board, colored tiles, and printed arrows representing the directions the kids wanted the robot to go. The robot and board were repurposed from a previous project, and the tiles and printed arrows were made from construction paper.

Code:

Since we were unable to get device-to-device communication working in time for the testing, we had the kids select their list of directions by laying out pictures of arrows representing left, right, and forward; a Tufts student then manually entered that sequence into the EV3 as a text file. The robot read and interpreted this input, then used its gyro sensor and motors to run the maze.

mazerunner.py
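Since mazerunner.py is attached as a file rather than inlined here, the sketch below illustrates the kind of direction interpretation described above. The command format (a list of "L"/"R"/"F" directions) and the function name are illustrative assumptions, and a simulated grid pose stands in for the gyro-assisted turns and motor moves on the actual robot.

```python
# Illustrative sketch of the direction-list interpretation; on the robot,
# the turns would be gyro-assisted and "F" would drive one grid square.

# heading: 0=N, 1=E, 2=S, 3=W; position: (col, row) on the grid
MOVES = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}

def run_directions(directions, heading=0, position=(0, 0)):
    """Apply a list of L/R/F commands and return the final pose."""
    col, row = position
    for cmd in directions:
        if cmd == "L":      # turn left 90 degrees
            heading = (heading - 1) % 4
        elif cmd == "R":    # turn right 90 degrees
            heading = (heading + 1) % 4
        elif cmd == "F":    # move forward one square
            dc, dr = MOVES[heading]
            col, row = col + dc, row + dr
    return heading, (col, row)

# Example: forward, right turn, forward
print(run_directions(["F", "R", "F"]))  # (1, (1, 1))
```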

The first video below is an excerpt from the videos taken on testing day, showing the kids interacting with the technology and the child development students explaining the overall game to the kids. The second video shows a test of the robot solving a full maze and crossing multiple tiles.

Takeaways and Future Plans:

Some key insights we gained from developing the prototype were that inter-device communication on the EV3s is difficult and slow to work with, and that moving to an RPi controller would make our lives a lot easier and let us get more done. Another takeaway is that the game is pitched at a good level for the kids: they got better at solving it and at interacting with the technology as our time with them progressed, which suggests our interface (and the paper pseudo-interface) worked well with the kids as the users. We also noticed that the kids enjoyed physically laying out their direction sequence with the direction tiles. The initial plan was for the kids to enter the robot's directions via buttons with arrows on them; however, the tiles let the kids visually see their inputted route, which would not have been possible with the button controller.

For the final testing day we plan to fix the device-to-device communication to make a networked system, use an RPi as the controller to speed up the whole system, and create a push-button interface to complement the tile-based one we used in prototyping. We will also build an entirely new controller so that the kids can enter their direction sequence directly to the robot using tiles. Additionally, the robot will announce the direction it is currently moving, so the kids can follow along as it executes their created sequence.

Apr 5 2018

Cooperation Car - Moral Development

Current Technical Solution:

The purpose of our project is to encourage the moral development of children ages 5-7. We decided to focus our project on cooperation and to create a technology that encourages its users to work together to reach an end goal.

Users are each given four "fuel" tokens of a single color. The robot has four fuel slots, meaning a single child is able to fill every slot with just their own color. The robot drives forward certain predetermined distances based on the variety of colors "played" by the children. Ideally, the children learn that by combining their fuels, the car will go farther.
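The "fuel variety" rule described above can be sketched as follows. The distances are illustrative placeholders, not the values used on the actual robot:

```python
# Sketch of the variety-to-distance rule: the more distinct colors in the
# four slots, the farther the car drives. Distances (cm) are illustrative.
DISTANCE_CM = {1: 10, 2: 25, 3: 45, 4: 70}

def drive_distance(tokens):
    """Return how far the car drives for the tokens in its four slots."""
    variety = len(set(tokens))
    return DISTANCE_CM[variety]

print(drive_distance(["red", "red", "red", "red"]))        # 10
print(drive_distance(["red", "blue", "green", "yellow"]))  # 70
```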

Image

Code:

Updates & Changes:

As a team, we were able to gather a lot of useful feedback on the functions of our first prototype. One major change we will be making is removing a slot, so that every child will have contributed a token by the time the "correct" combination is reached. Additionally, we were calling our robot a car, but it did not look like one, and we got the feeling the children were sometimes a little confused. For the final prototype, we plan to add a frame to our car as shown below:

Image

Another issue we ran into was that the children were inclined to believe that the different car speeds resulted from the colored fuel tokens representing different amounts of energy. To address this, we are changing the tokens to colored keys with the same functions as before.

Apr 3 2018

Slice It!! Robotics Final Project

For the current iteration of our project, we have decided to go with a photoresistor solution to detect the placement of pieces on our game board. On the hardware side, we wired the whole board to 22 ports on the Arduino; because each sensor is read as a digital signal, a position only registers as either occupied or empty.
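The digital-only readout described above can be sketched as a scan over the pins. This is a Python stand-in for the logic (the prototype itself ran on MATLAB and an Arduino), and the pin mapping is an illustrative assumption:

```python
# Stand-in sketch of the digital photoresistor readout: each of the 22
# board positions maps to one pin, and a read yields only placed / not
# placed, with no analog detail about which piece is there.
NUM_POSITIONS = 22

def read_board(digital_read):
    """digital_read(pin) -> 0/1; return the set of occupied positions."""
    return {pin for pin in range(NUM_POSITIONS) if digital_read(pin)}

# Simulated pins: pieces sitting on positions 3 and 17
occupied = read_board(lambda pin: 1 if pin in (3, 17) else 0)
print(sorted(occupied))  # [3, 17]
```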

On the software side, we have several programs, each written with a different intent. They implement designated levels and modes for the difficulty the game is played at. This will change in the final code: as it currently stands the game is in fact too easy for the students, so something more challenging would be more fun for them.

For ease of use, the prototype was coded in MATLAB; the final project will be done in Python 2.7. For our prototype, the procedure was to manually choose the level of play so we could assess the level of the students.

Image Image Image

Notes during Prototype Testing:

  • First girl recognizes fractions

  • Can do half third quarter and eighth

  • This girl is too smart

  • Recognised two thirds when number and picture are given

  • She's killing it

  • 2/4 = ½

  • Kids here started learning fractions in first grade (?)

  • Game pieces look like pizza

  • Computer records all answers

  • Were able to get ⅛ and ½ without picture

  • Kids have maintained focus for at least 15 minutes

  • Only kids to visit have been girls age 7+

  • Make halves yellow

  • Make thirds blue

  • Make Game Board spin around each time you get it right

  • Kids like the idea of a 1v1

  • Want to compete against each other

  • Kids want game to light up

  • Want happy music

  • Want it to say funny things

  • Jokes?

  • Why did the chicken cross the playground

  • To get to the other slide

  • How do you like “Slice it”?

  • Kids say call the game pizza slices

  • Make the game pieces look like pizza

Feedback:

During our testing we got a lot of feedback on how to proceed in the future. We had been underestimating second graders: at that point they already have a strong grasp of simple fractions, which made our game seem almost easy to them. Moving forward we can advance our idea knowing that it will not prove too challenging or frustrating for them. On the hardware side, we will build an image processing system, stored on the Arduinos, that does not require any photoresistors, as those proved quite difficult to work with: the Arduino was unable to detect some of the pins, which made them ineffective. Because of its ease of use with SimpleCV, we will continue with Python 2.7 rather than Python 3, which lacks SimpleCV support.

Hardware Gallery:

Image Image

Prototype Testing:

Image Image

Apr 1 2018

Racing Emotions

Shunta Muto, Osvaldo Calzada, Madhu Govind, Mary Egwim Rhn

Project Description:

The Racing Emotions game consists of storytelling, guessing emotions, and racing cars. Each player tries to guess the emotion of the character in the story, and the player's car moves according to how well the player's answer matches the character's emotion.
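The answer-to-movement idea above can be sketched as a simple scoring rule. The emotion groupings and step counts here are illustrative assumptions, not the values from the actual game:

```python
# Sketch of the scoring-to-movement rule: a car advances farther the
# closer the guessed emotion is to the story's target emotion.
RELATED = {
    "happy": {"excited", "proud"},
    "sad": {"lonely", "disappointed"},
    "angry": {"frustrated", "annoyed"},
}

def car_steps(guess, target):
    """Full credit for a match, partial for a related emotion, and a
    small step otherwise (there is no wrong answer, so every guess
    still moves the car)."""
    if guess == target:
        return 3
    if guess in RELATED.get(target, set()):
        return 2
    return 1

print(car_steps("excited", "happy"))  # 2
```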

Pictures/Video

Link to the code:

https://docs.google.com/document/d/1ihVaxVHJHGxbavSQEOfVEpWjbpKzN1jfOdprhIQG1So/edit?usp=sharing

Description of major changes/improvements

In short, we will be making technical changes so that it is easier for kids to understand that there is no wrong answer when guessing the emotion. On the implementation side, we will think about how to make the game proceed more smoothly and how to introduce the emotions to the kids at the beginning. A link to the detailed descriptions is below:

https://docs.google.com/document/d/138UQb31s7h-FwBpbm873Yk6wjwBbPiAuQ2oTNaBpoh8/edit?usp=sharing

Mar 31 2018

SimQuake

Description

Currently, we have a structure made of LEGO pieces, driven by a servo controlled by a Raspberry Pi. The pinion on the motor meshes with another gear, which is in turn connected to a few LEGO pieces that serve as an arm; the arm is attached to an acrylic board. The servo is programmed to rotate when a button is pressed. As the motor turns, the gears move a linkage through a pathway, converting the rotation into linear motion so that the acrylic board slides back and forth in one direction.
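The rotation-to-translation geometry can be sketched with the usual crank and rack-and-pinion formulas. The radius and angle values are illustrative, not measured from the actual build:

```python
import math

# For a simple crank of radius r at angle theta, the horizontal board
# displacement is roughly x = r * sin(theta); for a rack driven directly
# by a pinion, the linear travel is the arc length r * theta.
def crank_displacement(r_mm, theta_deg):
    """Horizontal displacement (mm) of a simple crank at angle theta."""
    return r_mm * math.sin(math.radians(theta_deg))

def rack_displacement(r_mm, theta_deg):
    """Linear travel (mm) of a rack driven by a pinion of radius r."""
    return r_mm * math.radians(theta_deg)

print(round(crank_displacement(20, 90), 1))  # 20.0
print(round(rack_displacement(20, 90), 1))   # 31.4
```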

SimQuake in action

Movement of Gears

Code

shaketable.py

Updates/Changes

  • Display for the timer to see how long the structure can last on the table

  • Better translation of energy from linear to rotational

  • Lego motors for more power (as opposed to the servos)

  • Changing the mounting of the servo - we have 3d printed pieces to hold the motor up - so that no one has to hold the motors physically.

  • Use 2 motors instead of just 1

Mar 29 2018

Narrabots

Robert Hrabchak, Riley Kolus, Kyle Paul, Meha Elhence, Hyejin Im

The technical component of the Narrabots system is a mobile robot that travels on a board representing the emotion spectrum. As the narrator tells a story and periodically asks questions, the child picks from the cards shown below to represent an emotional state.

Image

The robot reacts by moving positions on the color wheel to reach the correct color for the chosen emotional state. Below, green represents jealousy, and the darker the shade, the more intense the feeling.

Image

For the next iteration, the robot will have a line-following sensor so it can move to the correct positions under its own power. The children also indicated that they wanted the robot to hold the cards as they were chosen, so the next iteration will have a place to insert the cards into the robot.
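The emotion-to-position mapping described above could be sketched like this. The wheel layout and emotion names are illustrative assumptions, not the actual board design:

```python
# Sketch of moving between stops on the emotion color wheel: each emotion
# sits at an angle, and the robot takes the shorter way around in
# 90-degree stops. Layout and names are illustrative.
WHEEL = {"joy": 0, "jealousy": 90, "sadness": 180, "anger": 270}

def steps_between(emotion_from, emotion_to, step_deg=90):
    """Number of stops the robot travels between two wheel positions."""
    diff = (WHEEL[emotion_to] - WHEEL[emotion_from]) % 360
    return min(diff, 360 - diff) // step_deg

print(steps_between("joy", "sadness"))  # 2
```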

Mar 29 2018

Seaquence

(Formerly "Six Seas of Technological Development")

Description of Current Technological Solution

Hardware-wise, our project has four separate colored stations, each built on an EV3 brick. Each station has a button and a flag of the matching color fixed to a motor. Each station can rotate its flag, to indicate that it is part of the sequence, and sense touch as the children respond.

These four stations are all in turn controlled by a Raspberry Pi that acts as the “Simon”. This central computer runs the code that instructs the four stations to display a sequence of colors for the children to repeat. Then, it waits and receives information from the four stations to piece together a collective response. Finally, it analyzes the response to determine success or failure.

Software-wise, the code is all one file, run by the Raspberry Pi. First, it initializes all variables and devices for the game and then defines two subfunctions: one for recording responses when only one color input is expected, and one for recording two colors at once when the game requires the children to press two buttons simultaneously. From there, each level is a loop that iterates over that level's three sequences. Within each level loop, an embedded while loop contains the rest of the logic: a series of lines instructs the four stations to produce the sequence to be repeated, based on the index of the parent level loop, and after producing the sequence, the code awaits sensory input using one of the subfunctions defined above. Concluding the loop are two if statements that determine whether the response is correct or incorrect. A correct solution breaks the loop; an incorrect one resets it by showing the sequence again. An incorrect solution is only identified once the response reaches the same length as the solution.

One additional feature included for prototype testing was a "kill switch" which aborts the current sequence in the case that the children become confused or are otherwise struggling. It is also used to pause between levels, allowing time for the audio instructions to play.

During testing, we broke the code into three separate files. Each file contained the initialization code, subfunctions, and one level. This allowed us to jump around throughout the game more easily.

For the sake of testing, the built-in map and sounds were both faux-robotic (a PowerPoint presentation and keyboard sounds over a Bluetooth speaker), operated by one of our team members during testing.

Currently, there are no "captain" or "treasure chest" levels for the end of the game.

Pictures and Videos

Image

Current Code

#!/usr/bin/python3
from time import sleep
# Note: Device is the wrapper class used in class to control the local
# GoPiGo button and the networked EV3 stations over WiFi.

def main():
    # define hardware
    gpg = Device("this")
    killSwitch = gpg.init_button_sensor("AD2")
    print('Pi Initiated')

    stationR = Device('172.16.216.92')
    motorR = stationR.LargeMotor('outA')
    buttonR = stationR.TouchSensor('in1')
    print('Red Initiated')

    stationY = Device('172.16.143.207')
    motorY = stationY.LargeMotor('outA')
    buttonY = stationY.TouchSensor('in1')
    print('Yellow Initiated')

    stationG = Device('130.64.142.61')
    motorG = stationG.LargeMotor('outA')
    buttonG = stationG.TouchSensor('in1')
    print('Green Initiated')

    # stationB = Device('')
    # motorB = stationB.LargeMotor('outA')
    # buttonB = stationB.TouchSensor('in1')
    # print('Blue Initiated')

    def spin(motor):
        # rotate a station's flag one full revolution
        motor.run_to_rel_pos(position_sp=360, speed_sp=400, stop_action="hold")

    # "receive" function -- records a single-color response
    # 1 = red, 2 = yellow, 3 = green, 4 = blue
    def receive(response, i):
        if buttonR.is_pressed == 1:
            spin(motorR)
            response.append(1)
            print(response)
            i = i + 1
            sleep(1)
        elif buttonY.is_pressed == 1:
            spin(motorY)
            response.append(2)
            print(response)
            i = i + 1
            sleep(1)
        elif buttonG.is_pressed == 1:
            spin(motorG)
            response.append(3)
            print(response)
            i = i + 1
            sleep(1)
        # elif buttonB.is_pressed == 1:
        #     spin(motorB)
        #     response.append(4)
        #     print(response)
        #     i = i + 1
        #     sleep(1)
        return

    # "receive2" function -- records a simultaneous two-color response
    # 5 = red and yellow, 6 = red and green, 7 = red and blue
    # 8 = yellow and green, 9 = yellow and blue, 10 = green and blue
    def receive2(response, i):
        if buttonR.is_pressed == 1 and buttonY.is_pressed == 1:
            spin(motorR)
            spin(motorY)
            response.append(5)
            print(response)
            i = i + 1
            sleep(1)
        elif buttonR.is_pressed == 1 and buttonG.is_pressed == 1:
            spin(motorR)
            spin(motorG)
            response.append(6)
            print(response)
            i = i + 1
            sleep(1)
        elif buttonY.is_pressed == 1 and buttonG.is_pressed == 1:
            spin(motorY)
            spin(motorG)
            response.append(8)
            print(response)
            i = i + 1
            sleep(1)
        # (commented-out blue combinations 7, 9, and 10 mirror the
        # branches above and are disabled until a blue station is added)
        return

    # wait for go -- the kill switch doubles as a start/pause button
    while True:
        if killSwitch.is_button_pressed():
            print("Kill switch")
            break

    # LEVEL 1: single-color sequences
    print('LEVEL 1')
    j = 1
    while j < 4:
        i = 1
        solutionA = [2]
        solutionB = [3]
        solutionC = [1]
        response = []
        while True:
            if i == 1 and j == 1:
                print(solutionA)
                sleep(1)
                spin(motorY)
                i = 0
            elif i == 1 and j == 2:
                print(solutionB)
                sleep(1)
                spin(motorG)
                i = 0
            elif i == 1 and j == 3:
                print(solutionC)
                sleep(1)
                spin(motorR)
                i = 0
            elif (response == solutionA and j == 1) or \
                 (response == solutionB and j == 2) or \
                 (response == solutionC and j == 3):
                print(response)
                print('correct')
                break
            elif len(response) == 1:
                print(response)
                print('wrong')
                response = []
                i = 1
                sleep(1)
            elif killSwitch.is_button_pressed():
                print("Kill switch")
                break
            else:
                receive(response, i)
        j = j + 1
    print('LEVEL 1 DONE')

    # wait for go
    while True:
        if killSwitch.is_button_pressed():
            print("Kill switch")
            break

    # LEVEL 2: three-color sequences
    print('LEVEL 2')
    j = 1
    while j < 4:
        i = 1
        solutionA = [1, 2, 3]
        solutionB = [3, 2, 2]
        solutionC = [2, 1, 3]
        response = []
        while True:
            if i == 1 and j == 1:
                print(solutionA)
                sleep(1)
                spin(motorR)
                sleep(1)
                spin(motorY)
                sleep(1)
                spin(motorG)
                sleep(1)
                i = 0
            elif i == 1 and j == 2:
                print(solutionB)
                sleep(1)
                spin(motorG)
                sleep(1)
                spin(motorY)
                sleep(1)
                spin(motorY)
                sleep(1)
                i = 0
            elif i == 1 and j == 3:
                print(solutionC)
                sleep(1)
                spin(motorY)
                sleep(1)
                spin(motorR)
                sleep(1)
                spin(motorG)
                sleep(1)
                i = 0
            elif (response == solutionA and j == 1) or \
                 (response == solutionB and j == 2) or \
                 (response == solutionC and j == 3):
                print(response)
                print('correct')
                break
            elif len(response) == 3:
                print(response)
                print('wrong')
                response = []
                i = 1
                sleep(1)
            elif killSwitch.is_button_pressed():
                print("Kill switch")
                break
            else:
                receive(response, i)
        j = j + 1
    print('LEVEL 2 DONE')

    # wait for go
    while True:
        if killSwitch.is_button_pressed():
            print("Kill switch")
            break

    # LEVEL 3: sequences of simultaneous two-color presses
    print('LEVEL 3')
    j = 1
    while j < 4:
        i = 1
        solutionA = [5, 6, 8]
        solutionB = [6, 5, 8]
        solutionC = [8, 6, 5]
        response = []
        while True:
            if i == 1 and j == 1:
                print(solutionA)
                sleep(1)
                spin(motorR)
                spin(motorY)
                sleep(1)
                spin(motorR)
                spin(motorG)
                sleep(1)
                spin(motorY)
                spin(motorG)
                sleep(1)
                i = 0
            # the next two branches were bare "if"s in the original,
            # which broke the chain; they are elif here for consistency
            elif i == 1 and j == 2:
                print(solutionB)
                sleep(1)
                spin(motorR)
                spin(motorG)
                sleep(1)
                spin(motorR)
                spin(motorY)
                sleep(1)
                spin(motorY)
                spin(motorG)
                sleep(1)
                i = 0
            elif i == 1 and j == 3:
                print(solutionC)
                sleep(1)
                spin(motorG)
                spin(motorY)
                sleep(1)
                spin(motorR)
                spin(motorG)
                sleep(1)
                spin(motorY)
                spin(motorR)
                sleep(1)
                i = 0
            elif (response == solutionA and j == 1) or \
                 (response == solutionB and j == 2) or \
                 (response == solutionC and j == 3):
                print(response)
                print('correct')
                break
            elif len(response) == 3:
                print(response)
                print('wrong')
                response = []
                i = 1
                sleep(1)
            elif killSwitch.is_button_pressed():
                print("Kill switch")
                break
            else:
                receive2(response, i)
        j = j + 1
    print('LEVEL 3 DONE')

if __name__ == '__main__':
    main()

Updates for Final Version

1: During the test we found it inconvenient to show the kids the sequence again, because we had to enter a wrong sequence to force the program into another loop. Therefore, we plan to add a button that restarts the challenge, so that the kids can watch the sequence again any time they feel confused.

2: Since the levels are quite different from each other, kids often feel confused by a new level even after finishing the previous one successfully. For that reason, we will record more detailed audio instructions and demonstrate the new kind of sequence first. To achieve that, we need more pauses in the program to ensure all the guidance works smoothly.

3: In our current code, the motors rotate a full revolution both when showing sequences and when someone presses a button. This sometimes confused the children: they didn't know whether the motion was a response to their action or was showing them what to do next. Therefore, we want to distinguish the motor movement that shows a sequence from the movement that responds to their input.

Mar 29 2018