
Saturday, 13 May 2023

Programming Robots Virtually 4: Preview of Edbot Studio

In previous posts I looked at a few online robot simulators (see the links below).

A recent addition to these is the Edbot Studio Virtual Playground (https://studio.ed.bot/), a preview of simulation technology for Robots in Schools Ltd's Edbot robots.

Two Edbot robots are shown in a gym; you can select actions for the robots to carry out, including Gangnam Style dancing and headstands.







This is really a preview of the technology rather than a programming option - at the moment. Robots in Schools Ltd, who make the Edbots, say the Virtual Playground will be part of their Edbot Studio, a browser-based environment that will allow coding of both virtual and real robots in Scratch, Python and JavaScript.

I am really curious to see the full Edbot Studio in action when it is released, but for the moment getting virtual robots to dance and kick is still really good fun.


The physical Edbot robots are available from https://ed.bot/







All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Sunday, 21 February 2021

Escape the Maze with a VR robot - Vex VR




You don't need to buy a robot to start programming one; there is now a range of free, relatively easy-to-start-with robot simulators to play with. Three examples are listed below:

It is the last of these (https://www.vexrobotics.com/vexcode-vr) that is the focus of this post, returning to it after an earlier discussion in https://robotsandphysicalcomputing.blogspot.com/2020/04/programming-robots-virtually-1-vexcode.html

Two of the nice things about the package, apart from it being free, are that it uses a Scratch-like programming language and that it provides a 3D environment and models - playgrounds for a number of scenarios.

So in this post I will be discussing playing, or rather starting to play, with the robot navigating a 3D maze (see the figure above). A feature I particularly like is that you can change the view from an overhead one to an onboard version, or to one that seems to follow the robot.





So, as a starting point, I programmed it to essentially bounce along the walls, keeping the wall on its right and stopping when the downward 'eye' detects red on the floor marking the end of the maze. The sensors include left and right bumpers, along with two colour-detecting sensors, one facing forward and one facing down. The code I used is shown below:




It took 8 minutes to solve the maze - which is slow; I would be interested to see others' solutions being shared. As a simulated robot programming system this is great fun and challenging. I would recommend having a play - it is free and available at https://www.vexrobotics.com/vexcode-vr. I want to have a go with the Python version to replicate or better the solution above (select a text project rather than a blocks project when starting a new project).
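As a head start on that, a rough Python sketch of the same right-wall-following idea is shown below. This is not a translation of the block code above, and the object and constant names (drivetrain, left_bumper, right_bumper, down_eye, RED, MM, DEGREES) are assumptions based on the VEXcode VR Python environment, so check them against the editor's own examples.

# Rough sketch of the right-wall-following idea in VEXcode VR's Python mode.
# Names such as drivetrain, right_bumper and down_eye are assumed - check in the editor.
while not down_eye.detect(RED):                # stop when the red floor marking is seen
    # right-hand rule: prefer turning right, then try to move forward
    drivetrain.turn_for(RIGHT, 90, DEGREES)
    drivetrain.drive_for(FORWARD, 100, MM)
    while left_bumper.pressed() or right_bumper.pressed():
        # hit a wall: back off, turn left and try that direction instead
        drivetrain.drive_for(REVERSE, 100, MM)
        drivetrain.turn_for(LEFT, 90, DEGREES)
        drivetrain.drive_for(FORWARD, 100, MM)
drivetrain.stop()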






All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Saturday, 6 February 2021

Making a neural network in Tinkercad from Microbits

Tinkercad and microbit neural network

In a previous post I produced a single neuron based around microbits in Tinkercad - see here.

To extend this, the basic ideas discussed in that previous post were extended to three microbits joined together - in other words, a network of neurons, or a neural network.

The basic requirements of each 'neuron' are (a rough code sketch follows this list):
- By altering the bias (w0 in the example) and the weights, the behaviour of the switches changes.
- When a switch is pressed, a variable x1 or x2 is set to 1, depending on which button is pressed; when it is released, the variable goes back to 0.
- If (bias + w1*x1 + w2*x2) >= 0 then a T for True appears on the LEDs, otherwise F for False is shown.
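As a rough illustration of that behaviour - not the exact blocks used in the Tinkercad simulation - a single 'neuron' could be written in MicroPython something like this, with the weight values just placeholders:

# Sketch of a single micro:bit 'neuron' (MicroPython) - weights are placeholders.
from microbit import *

w0, w1, w2 = -1, 1, 1        # bias and weights - change these to alter the behaviour

while True:
    x1 = 1 if button_a.is_pressed() else 0   # inputs from the two buttons
    x2 = 1 if button_b.is_pressed() else 0
    if w0 + w1 * x1 + w2 * x2 >= 0:
        display.show("T")
        pin2.write_digital(1)                 # pass the result on via pin 2
    else:
        display.show("F")
        pin2.write_digital(0)
    sleep(100)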

So, by selecting the weights and connecting the outputs (p2) from the microbits labelled Red and Green in the image above as inputs to the yellow microbit 'neuron', we can form a neural network: switches as the inputs, and the screen on the yellow 'neuron' as the output of the network, showing true (T) or false (F).

So, to build an XOR from the 'neurons':
'hidden layer'
The Red microbit had w0 set to -1, w1 set to 0 and w2 set to 1.
The Green microbit had w0 set to -1, w1 set to 1 and w2 set to 0.

'output layer'
The Yellow microbit had w0 set to -1, w1 set to 1 and w2 set to 1.
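For the yellow output 'neuron' the idea is the same, except its inputs come from the pins wired to the Red and Green microbits rather than from buttons. A rough MicroPython sketch is below; the choice of pins 0 and 1 for the inputs is my assumption, and the original Tinkercad wiring may differ.

# Sketch of the yellow output 'neuron' - reads the Red and Green outputs on pins 0 and 1.
from microbit import *

w0, w1, w2 = -1, 1, 1        # output-layer weights as listed above

while True:
    x1 = pin0.read_digital()  # output of the Red 'neuron' (pin choice assumed)
    x2 = pin1.read_digital()  # output of the Green 'neuron' (pin choice assumed)
    display.show("T" if w0 + w1 * x1 + w2 * x2 >= 0 else "F")
    sleep(100)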

All of this can be found at https://www.tinkercad.com/things/hPV4nU0Asr5-smooth-bojo


All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Saturday, 25 April 2020

Programming Robots Virtually 1 - VEXcode VR

For a number of years, I have been playing with robots as a means of developing programming/coding skills with students. The problem is that when classes get larger, or the robots are used as part of an assessment, there are very rarely enough robots to satisfy all the students (Turner and Hill, 2008). Therefore, the search has been on for a tool that allows robots to be simulated and programmed; ideally web-based, free and simple to use. Lately, a number of interesting tools have appeared. In this series of posts, I am going to experiment with a few of them, starting in this post with VEXcode VR - available at https://vr.vex.com/.





VEXcode VR (https://vr.vex.com/) from VEX Robotics (https://www.vexrobotics.com/) is a simulator and programming tool built around their Scratch-like programming tool VEXcode and, at the time of writing, is free. If you can do Scratch, this is a nice next stage, consisting of the simulator (the playground) and the programming environment (see below and the video above).




Playgrounds
These are the simulated environments you can select from, with two camera views - a downward camera for an overhead view and an angled camera to give a 3D view (as shown in the video) - selected via buttons on the bottom right-hand side of the playground. You can also toggle, using the third button on the right-hand side, a display of the status of the various sensors. There are a number of different playgrounds to play with; in this post I am going to use two of them.



Example 1: The Square
Using the Grid Map playground and the angled camera, I wanted to start with an old stand-by: getting the robot to move in a square. The robot moves forward 30 mm and then turns right 90 degrees, and this is repeated 4 times (see below).
The commands are very Scratch-like. I was impressed: the 3D view gave a clear picture of it in action, the commands were intuitive and (yes, repeating myself) very easy to transfer to from Scratch.
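For reference, the same square in VEXcode VR's Python mode might look something like the short sketch below; the block version is what I actually used, and the Python function names are assumptions to be checked against the environment.

# Drive in a square: forward 30 mm then a 90-degree right turn, repeated 4 times.
for side in range(4):
    drivetrain.drive_for(FORWARD, 30, MM)
    drivetrain.turn_for(RIGHT, 90, DEGREES)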



Example 2: Playing with Sensors a bit
Now for more fun: getting it to react to the environment a bit. By changing the playground to Castle Crasher you get an environment that has simulated blocks and a red perimeter to interact with. As you would hope, there are sensing blocks including LeftBumper and RightBumper - no guesses for what they do - and DownEye, which can detect the red line. The code is simple and shown below: if a block is detected in front using the bumpers, the robot moves to the side and rechecks whether a block is in front; if not, it goes forward. If it finds the red line, it reverses back and rotates 180 degrees.
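For reference again, the logic described above could be sketched in the Python mode roughly as follows; this is the behaviour as described rather than a line-for-line translation of the blocks, and the function names are assumptions.

# Castle Crasher sketch: dodge blocks using the bumpers, turn around at the red edge.
while True:
    if down_eye.detect(RED):
        # found the red perimeter: reverse back and rotate 180 degrees
        drivetrain.drive_for(REVERSE, 100, MM)
        drivetrain.turn_for(RIGHT, 180, DEGREES)
    elif left_bumper.pressed() or right_bumper.pressed():
        # block in front: move to the side and recheck
        drivetrain.drive_for(REVERSE, 50, MM)
        drivetrain.turn_for(RIGHT, 90, DEGREES)
    else:
        drivetrain.drive(FORWARD)              # nothing detected, keep going forward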



As a side project, I wondered what would happen if you didn't put code in to detect the red line - how would it cope with falling off the surface? It simulates this quite well, showing the robot falling off, which is quite fun. One mistake I made initially was accidentally selecting the wrong form of turning action: rotating when it should have been a turn.


Overview so far...
If you can already use the movement, sensing and control blocks in Scratch, you can do this. It has potential as a source of online activities, especially as, at the time of writing, in the UK we are 'social-distancing'. In their paper, Turner and Hill (2008) also highlighted that robots are a difficult resource to manage for a large class; this kind of option allows simulation and programming of a robot to be tried out without actually having the robot. Most importantly, it is fun.


Reference
Turner S and Hill G (2008) "Robots within the Teaching of Problem-Solving", ITALICS, vol. 7, no. 1, June 2008, pp. 108-119, ISSN 1473-7507. https://doi.org/10.11120/ital.2008.07010108




All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Friday, 24 June 2016

Playing with Smurf the Robot

Experimenting with an Aldebaran NAO robot - nicknamed Smurf - to get the robot to deliver a short welcome. The video below shows the robot being simulated using the code in figure 1 and then shows 'Smurf' actually carrying out the routine (the bottom video shows this bit without the simulator). The only difference between the code on the simulator and the code running on the actual robot was an extra command added at the start of the robot version so that it only starts when the top of the head is tapped.



figure 1. Choregraphe program for the routine.
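The routine itself was built graphically in Choregraphe (figure 1), but the underlying 'wait for a head tap, then deliver the welcome' idea can also be sketched with the NAOqi Python SDK. The sketch below is an illustration only - the IP address and the spoken text are placeholders, not the actual routine.

# Rough NAOqi Python sketch: wait for a tap on the top of the head, then speak.
from naoqi import ALProxy
import time

ROBOT_IP = "192.168.1.10"    # placeholder - replace with the robot's address

memory = ALProxy("ALMemory", ROBOT_IP, 9559)
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

# Wait until the front head touch sensor reports a tap.
while memory.getData("FrontTactilTouched") == 0.0:
    time.sleep(0.2)

tts.say("Hello, I am Smurf. Welcome!")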






All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with.
