The Raspberry Pi Foundation recently released a programming activity, Alien Language, with support from Dale at Machine Learning for Kids, which is a brilliant use of Scratch 3 speech recognition to control a sprite in an alien language. Do the activity - it is very much worth doing, and it will make sense! I would also recommend going to the machinelearningforkids.co.uk site anyway; it is full of exciting things to do (for example, loads of activities at https://machinelearningforkids.co.uk/#!/worksheets ). Scratch 3 has lots of extensions, accessible through the Extension button in the Scratch 3 editor (see below), which add fun new blocks to play with.
The critical thing for this post is that Machine Learning for Kids have created a Scratch 3 template with their own extensions built into it: https://machinelearningforkids.co.uk/scratch3/. One of these is a Speech to Text extension (see below). You must use this version, not the standard Scratch 3.
My idea is this: can I set it up so that it reacts one way when I say "hello", but if I then say "french" followed by "hello", it says "Bonjour"? Two other extensions are needed along with the Speech to Text one - the Text to Speech extension and the Translate extension, shown below.
Ok, so to the fun bit. The listen and wait, and when I hear blocks are the key new blocks, and they do what they say. The three sets of code below are the ones I used for this activity.
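If you would like to see the same idea outside Scratch, here is a rough Python sketch of the logic only - it is not the Scratch blocks themselves, it uses typed input in place of speech recognition, and the phrases and the "french" mode switch are just illustrative assumptions.

```python
# A rough, text-only analogue of the Scratch idea: say "french" to switch
# language mode, then "hello" is answered with "Bonjour" instead of "Hello".
# Illustrative sketch only - typed input stands in for speech recognition.

translations = {"hello": "Bonjour"}  # assumed single phrase for the demo
mode = "english"

while True:
    heard = input("Say something (or 'quit'): ").strip().lower()
    if heard == "quit":
        break
    elif heard == "french":
        mode = "french"
        print("Switched to French mode")
    elif heard == "hello":
        if mode == "french":
            print(translations["hello"])
        else:
            print("Hello")
```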
Thank you to Machine Learning for Kids for creating such a brilliant Scratch extension - this is well worth a play with.
All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
In an earlier post, Build a Disco Cube:bit that reacts to music, the vibrations of the music make the cube, sitting on a speaker with the volume pushed to 11 (just to test it of course), react to the music. The accelerometer values in the micro:bit, in the three axes, are fed back to change the neopixels' colours. Simple but good fun.
With some very minor (and I do mean minor) changes it works on Kitronik's Game Zap - eight pixels are altered at a time instead of five, but apart from that nothing more. The code, in Python, is shown below:
from microbit import *
import neopixel, random

# 64 neopixels on the board, driven from pin0
np = neopixel.NeoPixel(pin0, 64)

while True:
    # step through the display, altering a group of eight pixels at a time
    for pxl in range(3, 64, 8):
        # scale the accelerometer readings (three axes) down to use as colour values
        rd = int(abs(accelerometer.get_x()) / 20)
        gr = int(abs(accelerometer.get_y()) / 20)
        bl = int(abs(accelerometer.get_z()) / 20)
        np[pxl] = (rd, gr, 0)
        np[pxl + 1] = (rd, gr, 0)
        np[pxl - 1] = (rd, gr, 0)
        np[pxl + 2] = (rd, gr, 0)
        np[pxl + 3] = (0, gr, rd)
        np[pxl - 2] = (0, gr, rd)
        np[pxl - 3] = (rd, 0, 0)
        np[pxl + 4] = (0, gr, 0)
    np.show()
I was impressed that with a few tweaks it worked! Please feel free to share and copy; if this is useful to you, please say so in the comments.
All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
This activity starts to explain the basic idea behind how a Neural Network works, by building its most basic unit, a simple neuron, using a spreadsheet.
Objectives:
- To see that we can start building machine learning tools in commonly used software applications, such as a spreadsheet, without a lot of programming (though programming is good fun);
- To see that if we change a few values, it changes what the neuron does.
What might be helpful to know before we start?
Don't worry - the video will mention these and show them in action.
In the press, we see a lot of talk about Artificial Intelligence and Machine Learning, and one thing often mentioned is Deep Learning, which is a form of something called a Neural Network. One way to think of a Neural Network: in our brains we have lots of processing units called neurons, which are connected together to form a massive network of neurons - a neural network. What computer scientists have done is take this idea and use it to create an artificial version, so we have a tool that learns.
The simplest unit in these networks is the neuron, and we are going to build a simple artificial neuron together. It works by doing two things:
taking the inputs and multiplying each one by a value (a weight), then adding these multiplied inputs together to get a single number;
taking this single number and using it to decide what the neuron's output is.
We can do a lot with a neuron, including building logical operations. In this activity, we are going to look at two basic logic operations: AND (when all the inputs are TRUE (in our case today, 1), the output of the neuron is TRUE) and OR (when at least one of the inputs is TRUE, the output is TRUE).
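For readers who prefer code, here is a minimal Python sketch of the same idea - a single neuron that multiplies its two inputs by weights, adds a bias, and outputs 1 if the result is at least 0. The particular weight and bias values are illustrative assumptions, chosen so the neuron behaves as an OR gate.

```python
def neuron(input1, input2, weight1, weight2, bias):
    # Step 1: multiply each input by its weight and add them up (plus the bias)
    weighted_sum = weight1 * input1 + weight2 * input2 + bias
    # Step 2: use the single number to decide the output (1 if it is 0 or more)
    return 1 if weighted_sum >= 0 else 0

# Illustrative values: weights of 1 and 1 with a bias of -1 give an OR gate
for input1 in (0, 1):
    for input2 in (0, 1):
        print(input1, input2, neuron(input1, input2, 1, 1, -1))
```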
Spreadsheet cells have some cool features:
if instead of putting A2 into a formula in a cell we put $A$2, then when we copy that cell's content and paste it into another cell, the value stored in A2 will always be used; otherwise, pasting changes which cell is used;
next is IF: we can build a test into our system with =IF(whatever the test is, what happens if the test is TRUE, what happens if the test is not true).
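As a quick illustrative example (the cell references here are just for demonstration): if C2 contains =A2*2 and we copy it into C3, it becomes =A3*2; but if C2 contains =$A$2*2, the copy in C3 still reads =$A$2*2. In the same way, =IF(A2>=0,1,0) puts 1 in the cell if A2 is zero or more, and 0 otherwise.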
Task 1: Which Spreadsheet to use and setting up.
Google Sheets or Excel can be used; the process is the same.
Please watch this video first; it will take you through the activity and includes pauses to allow you to stop the video and type in the things needed. So watch the whole video first, then repeat it if you need help, stopping the video when you need to.
Copy the spreadsheet above.
At the end of this task we should have:
The spreadsheet started;
All the columns labelled;
The inputs set up;
Some initial values, called weights, set up.
Task 2: Adding the rules
In cell H2 enter the following: =$E$2*A2+$F$2*B2+$D$2. What is happening is that weight 1 (in E2) is multiplied by input 1, weight 2 (in F2) by input 2, and these are then added together with the bias (in D2). This is the weighted sum.
The dollar signs fix the formula so that it always uses those particular cells, such as E2. Now if we copy this cell and paste it into the three cells H3, H4 and H5, the formula is copied and its output changes based on the inputs, while the weights stay the same.
Final stage: in J2 add the following: =IF(H2>=0,1,0). What this says is that if the weighted sum is greater than or equal to 0, then the output of the neuron is 1 (TRUE), otherwise it is 0 (FALSE). We are done - we have our neuron.
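As a quick check (using, for illustration, weight 1 = 1, weight 2 = 1 and a bias of -1; the values in your spreadsheet may differ): inputs of 0 and 0 give a weighted sum of -1, so the output is 0; inputs of 0 and 1, or 1 and 0, give a weighted sum of 0, so the output is 1; and inputs of 1 and 1 give 1, so the output is 1. That is the OR behaviour described earlier.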
At the end of this task we should have:
The inputs and weights multiplied together;
The multiplied, weighted inputs added together to create a single number;
The rule created that says the output is based on the single number;
Seen that we have an OR gate.
Task 3
At the end of this task we should have seen what can happen when we change the weights - in this case, OR becomes AND. To do this, change the bias value to -2.
Have a play with the weights. Do they always have to be whole numbers? What other values of the weights work to produce an AND? You only need to alter the bias, weight 1 and weight 2.
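For example (again assuming illustrative weights of 1 and 1), with the bias changed to -2 the weighted sums become: -2 for inputs 0 and 0, -1 for 0 and 1 or 1 and 0, and 0 for 1 and 1. Only the last of these is greater than or equal to 0, so only inputs of 1 and 1 give an output of 1 - which is AND.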
Where now? Activities to do later if you want.
If you want to take this further, these videos might help:
Follow on Activity 1: Training a Single Neuron in a spreadsheet
Follow on Activity 2: Combining three neurons to make a Neural Network in a Spreadsheet
Follow on Activity 3: Can we build a Simple Neural Network using BBC micro:bits?
Follow on Activity 4: Why do we need to learn about Machine Learning
All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
In the previous blog posts for this 'series' "It is a good time...." Post 1 looked at the hardware underpinning some
All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
For a few years, I have been a fan of Aframe and AR.js - these are fantastic tools for creating web-based Virtual and Augmented Reality.
Now AR.js has just got easier - no coding needed, with the Beta version of AR.js Studio, including using markers (see this previous post for more details) and, the focus of this post, geo-located or markerless AR.
You will then be asked for the longitude and latitude of where you want your AR content to be located; up to 10 locations can be used - I have only used one to trial it. If you don't know these co-ordinates, they have included a link to a site, https://www.latlong.net/ (see below), that will give you them, and you can then transfer them into AR.js Studio - that is the geo-location part done. Now for the thing at the location.
So for the first experiment, I am going to use a free 3D model from https://sketchfab.com/3d-models/duck-6e039c6c606c4c26a1359514352629fd produced by likangning93 and released under a Creative Commons licence on Sketchfab. It is as simple as clicking upload file and browsing. The last stage is publishing it to GitHub and getting a URL, or downloading the files, which you can later add to a server - both are really just a click.
Some interesting things I noticed: it is relatively easy to do, and I think the resources downloaded through AR.js Studio could be a great start for a more complex project. Initially, I didn't have location turned on on my phone (obvious, I know), but I could still see the object when I looked down - so that is a great feature to have when you are trying out markerless AR to see what it could look like.
All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
The start screen above gives you the option of location-based or marker-based projects; I am selecting marker-based and then pressing Start building.
I am going to use the premade marker, but you can upload your own (there is a guide to what makes a good marker). The premade marker can be downloaded from the site using the download marker link underneath the marker. Apart from that, you don't have to do anything else to select the marker.
Now you choose whether you want a 3D object, an image or a video. For this experiment, I am going to use a free 3D model from https://sketchfab.com/3d-models/duck-6e039c6c606c4c26a1359514352629fd produced by likangning93 and released under a Creative Commons licence on Sketchfab. It is as simple as clicking upload file and browsing.
The last stage is exporting the project. There are two options:
- Publish to GitHub
- Download package
My advice is, if you don't have your own web server, get yourself a GitHub account and choose that option; you just log in to your account. You will need to give the project a name and then press Publish. Depending on your internet connection, it can take from a few seconds to a minute or so, but it is worth the wait.
So to test it, I am typing this URL into the Safari browser (Chrome can play up) on my phone and allowing access to my camera (see below, or try it for yourself, which is more fun).
This Beta version is very good: no coding is needed by the user, and there are easy steps to an AR experience. At the time of writing, the only slight issue was that you need to ensure the file extensions are not capitalised, but other than that it is a great tool for producing a single AR example. I need to try the location-based version next.
All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
With the sad news that Anki is shutting down ( https://www.vox.com/2019/4/29/18522966/anki-robot-cozmo-staff-layoffs-robotics-toys-boris-sof...
All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon