Sunday, 24 June 2018

Computing staff at Women into Engineering Day 19th June 2018


Computing staff led some of the activities at the University of Northampton's Women into Engineering Day on 19th June 2018. Three activities were led by the team:
- Meet Red the Robot
- Blinking LEDs
- Web programming
Facilitated by Scott Turner, Liz Coulter-Smith and Iain Douglas.

Below are some of the tweets from the day

To hear Red the Robot talk and dance a bit go to https://www.youtube.com/watch?v=cvF0Q8OVV9g


All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Thursday, 10 May 2018

MSc meets Micro:Bit

I have recently been teaching a module on Internet Programming on an MSc Computing programme (see related links), and was looking for a way to introduce a little physical computing to finish off the module - micro:bits offer a route.

So, a bit of context: most of the students on the module had first degrees in either networking or software engineering, so before they start the module they are competent in programming with JavaScript, HTML, CSS and PHP. The module therefore looked to develop new areas such as introductory blockchain, virtual reality via the web (e.g. WebVR) and using social media sources, before finally looking at physical computing as a lead-in to the Internet of Things (IoT). As part of this last topic, the micro:bit was used to give some experience of programming a physical device and of very simple networking.

An activity was produced where:

  • they, in pairs, initially replicated some code and worked out how it worked;
  • they then took the code and experimented with their own ideas.
In all cases they had to produce something where an action on one micro:bit caused another micro:bit to do something in response.



Initially, JavaScript blocks (as above) were used; some students stuck with the graphical blocks, others moved into the text-based version. As far as the activity went it didn't matter; the main goals were to see the programming of a physical device via a web interface, to break a little of the mystique that programming physical devices is always much harder, and to get a bit of very simple networking going.
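As a flavour of the kind of exchange the pairs built, here is a minimal MakeCode JavaScript sketch of one micro:bit triggering another over radio (the group number, button and icon are illustrative choices, not the exact code used in class):

// Both micro:bits run this code and must share the same radio group
radio.setGroup(42)

// Pressing button A sends a number to the paired micro:bit
input.onButtonPressed(Button.A, function () {
    radio.sendNumber(1)
})

// When a number arrives, respond on this micro:bit
radio.onReceivedNumber(function (receivedNumber) {
    if (receivedNumber == 1) {
        basic.showIcon(IconNames.Heart)
    }
})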

Many of the students started to investigate getting sounds to play on headphones and getting one micro:bit to trigger the other to play. One group went further and started playing with Python.

Reflection bit - if I had a similarly competent group again I would start this earlier; the level of engagement seemed high and the activities could then start developing towards IoT. Though I admit to a bias for physical computing, I think it is appropriate in HE teaching, even using tools primarily designed for schools, like the micro:bit.


Related Links
MSc Computing
MSc Computing (Computer Network Engineering)
MSc Computing (Software Engineering)



All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Friday, 20 April 2018

Summary of Robots at BCS Northampton

On the 17th April 2018 I had the honour of presenting a public talk on robots for the Northampton Branch of the British Computer Society (BCS). This post aims to summarise the session.

The session was really from a personal perspective and journey, covering where I think robots in homes and schools are going, and an overview of some of the projects I have been involved in. The first part was the presentation - the slides are shown below.




The videos used in the presentation are shown below. The first video is an introduction and welcome from Red the Nao robot.





The next video shows a programmed Cozmo, using Anki's graphical programming language.




The second section of the session was playing with the robots - Red the Nao, an Anki Cozmo and a UBtech Alpha 2 - and having a play with some Crumble-based junkbots. The Crumble junkbots were used on a PC and on a Raspberry Pi via a Pi-top CEED.
Red (at the back), Alpha 2 (middle) and Cozmo (front)

Crumble controller from Redfern Electronics

Crumble as part of a junkbot.

Highlights of the evening were Red going for a walk 'hand-in-hand' with one of the audience and Cozmo chatting away; as always (and rightly) the stars of the talk were the robots.


All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Tuesday, 3 April 2018

How to produce a Microbit neural network

This is really part two of a set of posts in response to a question from Carl Simmons (@Activ8Thinking) concerning building a simple neuron on a micro:bit. In the previous post a single neuron was produced. This post looks at producing a network of neurons, i.e. a neural network, to solve a problem that a single neuron can't solve: making an Exclusive OR gate (XOR).


1. Quick Overview
1.1 The neuron itself

  • Inputs are going to be binary
  • Weighted sum is bias + w1*input1 + w2*input2
  • If the weighted sum >= 0 then the output is True (T on the LEDs) or '1'
  • If the weighted sum < 0 then the output is False (F on the LEDs) or '0'
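For example, with a bias of -2 and both weights set to 1 (the AND gate from the previous post), inputs (1, 1) give -2 + 1 + 1 = 0, so the output is True; inputs (1, 0) give -1, so the output is False.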
1.2 The XOR
Essentially for the two input case if the two inputs are different then the output is True.
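As a truth table:

input1  input2  output
0       0       False
0       1       True
1       0       True
1       1       False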

The figure below shows the arrangement of the connections; pin 2 is the output of each neuron. The two micro:bits/neurons on the left of the picture take in the two inputs (the same inputs go to both neurons); the outputs from these neurons are the two inputs to the output neuron on the right.




Figure 1


The micro:bit objects used in Figure 1 were produced using the micro:bit Fritzing diagram available at https://github.com/microbit-foundation/dev-docs/issues/36 - thanks to David Whale (@whalleygeek) for these.




2. Neuron 1
This is the top-left neuron in Figure 1. This neuron is set to produce an output of TRUE (pin 2 going high) when the first input goes low and the second input goes high. The code for it is shown below.
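As a hedged MakeCode JavaScript sketch of the behaviour described (the weight and bias values are my own illustrative choices that satisfy the description, and P0 and P1 are assumed to be the two input pins):

// Weights chosen so only input1 = 0, input2 = 1 fires the neuron
let bias = -1
let w1 = -2
let w2 = 1

basic.forever(function () {
    let input1 = pins.digitalReadPin(DigitalPin.P0)
    let input2 = pins.digitalReadPin(DigitalPin.P1)
    let sum = bias + w1 * input1 + w2 * input2
    if (sum >= 0) {
        basic.showString("T")
        pins.digitalWritePin(DigitalPin.P2, 1)
    } else {
        basic.showString("F")
        pins.digitalWritePin(DigitalPin.P2, 0)
    }
})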





3. Neuron 2
This is the bottom-left neuron in Figure 1. This neuron is set to produce an output of TRUE (pin 2 going high) when the first input goes high and the second input goes low. The code for it is shown below.
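Under the same sketch as neuron 1, this behaviour amounts to swapping the two weights (e.g. w1 = 1 and w2 = -2, keeping the bias at -1).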





4. Output Neuron
This is the right-hand neuron in Figure 1. This neuron is set to produce an output of TRUE (pin 2 going high) when either of its inputs (the outputs from neurons 1 and 2) goes high - in other words, acting as an OR gate. The code for it is shown below.

The overall effect is that when the two inputs to the network are different, the output of the network (this neuron) is TRUE - the XOR behaviour.
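Again as a hedged sketch, the output neuron can reuse the same MakeCode JavaScript pattern with OR-gate weights (a bias of -1 and both weights 1 is one choice that works; P0 and P1 are assumed to be wired to pin 2 of neurons 1 and 2):

// OR gate: fires if either input is high
let bias = -1
let w1 = 1
let w2 = 1

basic.forever(function () {
    let input1 = pins.digitalReadPin(DigitalPin.P0)
    let input2 = pins.digitalReadPin(DigitalPin.P1)
    if (bias + w1 * input1 + w2 * input2 >= 0) {
        basic.showString("T")
        pins.digitalWritePin(DigitalPin.P2, 1)
    } else {
        basic.showString("F")
        pins.digitalWritePin(DigitalPin.P2, 0)
    }
})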




5. In Action
The wiring is messy but the effect is possible to see in these images. The top neuron is the output neuron.
Figure 2: inputs to the network (input 1 low and input 2 high)
Figure 3: inputs to the network (input 1 high and input 2 low)

Figure 4: inputs to the network (both inputs the same)
6. Room for expansion
The neurons were 'trained' in this case by selecting the weights by hand; an improvement would be to get them to learn. How to do this on a micro:bit takes a bit more thinking about, but I would be interested in seeing how others solve that problem.




All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Friday, 2 March 2018

Microbit Neuron - producing a single neuron using a microbit

This post is in response to a question from Carl Simmons (@Activ8Thinking) asking whether anyone has built a simple neuron on a micro:bit.


Quick Overview

  • Inputs are going to be binary
  • Weighted sum is bias + w1*input1 + w2*input2
  • If the weighted sum >= 0 then the output is True (T on the LEDs) or '1'
  • If the weighted sum < 0 then the output is False (F on the LEDs) or '0'



First attempt - A simple gate using the buttons A and B
The first attempt uses the A and B buttons on the micro:bit as the two inputs, and it shows T for true and F for false on the LEDs. The weights produce an AND gate; if the bias is changed from -2 to -1 you get an OR.
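A minimal MakeCode JavaScript sketch of this first attempt (illustrative code matching the described behaviour, with weights of 1 and a bias of -2 giving the AND gate):

let bias = -2
let w1 = 1
let w2 = 1

basic.forever(function () {
    // Treat each pressed button as an input of 1
    let input1 = input.buttonIsPressed(Button.A) ? 1 : 0
    let input2 = input.buttonIsPressed(Button.B) ? 1 : 0
    if (bias + w1 * input1 + w2 * input2 >= 0) {
        basic.showString("T")
    } else {
        basic.showString("F")
    }
})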





More Physical Solution for Single Neuron

So in this case the buttons are removed and P0 and P1 form the inputs; the weights are the same as in the previous example, with the bias of -2 being used to produce an AND gate. Programming-wise this is a simpler solution than the previous one: there is no converting of button presses into inputs.
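A matching sketch for this pin-based version (again illustrative rather than the original code):

let bias = -2

basic.forever(function () {
    // P0 and P1 read 1 when connected to 3V, 0 otherwise
    let sum = bias + pins.digitalReadPin(DigitalPin.P0) + pins.digitalReadPin(DigitalPin.P1)
    if (sum >= 0) {
        basic.showString("T")
    } else {
        basic.showString("F")
    }
})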




Figures below show the 'neuron' in action.

The first one shows the case when both inputs are '0', i.e. not connected to the 3V connection. The output is False (F on the LEDs).


This figure shows the case when only one input is '1'; the output is False.



Finally, when both inputs are '1', the output goes to True (T on the LEDs).




Where next?
Adapting the code so it produces a digital output, and then combining several neurons into a small network to solve a problem that a single neuron can't: the Exclusive OR (XOR).



All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Tuesday, 27 February 2018

WebVR playtime 2: video, 360 video and objects

This is going to be a short series of articles about some experiments with WebVR (Web-based Virtual Reality) - in this case based on the wonderful A-Frame (https://aframe.io). In the first post, WebVR playtime 1: Basics of setting up, images and rotating blocks, I looked at setting up a scene and then rotating an object.

In this post, I am going to recap the basics, then look at adding video, 360 degree video, and models developed elsewhere.


1. The approach and setting up

I chose to use A-Frame (https://aframe.io) inside Thimble (https://thimble.mozilla.org); Thimble was selected for four reasons: it is an online editor, it is simple to use, it is free, and you see the preview immediately. In Thimble, though, try to keep the image and video file sizes small.

You can pretty much treat it as HTML, after you have added the script file shown in the example below.
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>

    </a-scene>
  </body>

</html>
The items to be added all go between <a-scene> and </a-scene>:
      <a-text value="Hello" color="black" position="0 1.8 0.5" width="10"></a-text>
      <a-sky color="orange" ></a-sky>
For example
    <a-scene>
      <a-text value="Hello" color="black" position="0 1.8 0.5" width="10"></a-text>
      <a-sky color="orange" ></a-sky>
    </a-scene>

The video below shows setting up and adding a box to the scene.




2. Adding video
In some ways it is as easy to add video as it is to add an image: at its simplest, adding src="" with either the URL or a relative filename between the quotation marks works for both images and video. Alternatively, the <a-video src=""></a-video> combination, again with the filename or URL between the quotation marks, adds a block and pastes the video onto it. The video below shows a worked example of these two approaches.
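As an illustrative snippet inside <a-scene> (the filename clip.mp4 is a placeholder for whatever video file you have added to the project):

    <a-scene>
      <a-assets>
        <video id="clip" src="clip.mp4" autoplay loop></video>
      </a-assets>
      <a-video src="#clip" width="4" height="2.25" position="0 2 -4"></a-video>
    </a-scene>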




3. 360 degree video.
A-Frame allows 360 degree video to be incorporated into the scene using the <a-videosphere> tag. The videos below show worked examples of this.
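Something like the following (again, pano360.mp4 is a placeholder for an equirectangular 360 degree video file):

    <a-scene>
      <a-assets>
        <video id="pano" src="pano360.mp4" autoplay loop></video>
      </a-assets>
      <a-videosphere src="#pano"></a-videosphere>
    </a-scene>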




4. 3D objects and Assets
We can also add 3D models developed by others into our scene. In the video below a penguin, defined externally using .obj for the model and .mtl for the material, is loaded into the scene.
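A sketch of the loading pattern (penguin.obj and penguin.mtl are placeholders for the downloaded model files):

    <a-scene>
      <a-assets>
        <a-asset-item id="penguin-obj" src="penguin.obj"></a-asset-item>
        <a-asset-item id="penguin-mtl" src="penguin.mtl"></a-asset-item>
      </a-assets>
      <a-obj-model src="#penguin-obj" mtl="#penguin-mtl" position="0 1 -3"></a-obj-model>
    </a-scene>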




To read more go to https://aframe.io/docs/0.7.0/introduction/ 






All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Sunday, 25 February 2018

WebVR playtime 1: Basics of setting up, images and rotating blocks.

This is going to be a short series of articles about some experiments with WebVR (Web-based Virtual Reality) - in this case based on the wonderful A-Frame (https://aframe.io). OK, a bit of context: I have been working with some MSc students in this area and we have been exploring it together - I love learning from and with my students.

Firstly, it is great fun and nowhere near as hard as I thought it was going to be when I first started. 

1. The approach
My approach is to use A-Frame (https://aframe.io) inside Thimble (https://thimble.mozilla.org). Thimble was selected for four reasons: it is an online editor, it is simple to use, it is free, and you see the preview immediately. Its main downside is that images and videos have to be relatively small, and there cannot be too many of them.

2. How easy is it?
You can pretty much treat it as HTML, after you have added the script file shown in the example below.
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>

    </a-scene>
  </body>

</html>
The items to be added all go between <a-scene> and </a-scene> with a parent-child relationship; for example, a block of text saying Hello in black and an orange sky are both 'children' of <a-scene>:
      <a-text value="Hello" color="black" position="0 1.8 0.5" width="10"></a-text>
      <a-sky color="orange" ></a-sky>
For example
    <a-scene>
      <a-text value="Hello" color="black" position="0 1.8 0.5" width="10"></a-text>
      <a-sky color="orange" ></a-sky>
    </a-scene>

The video below shows setting up and adding a box to the scene.



This next video takes this a little further by adding rotation to an object.
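One way to do this in A-Frame 0.7 is the <a-animation> child element; a sketch (the colour, position and timing values are arbitrary):

    <a-box color="red" position="0 2 -4">
      <a-animation attribute="rotation" to="0 360 0" dur="4000" repeat="indefinite"></a-animation>
    </a-box>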



In this video, mapping an image to an object and changing the camera position are looked at.
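A sketch of both ideas, placed inside <a-scene> (texture.jpg is a placeholder image file; the camera position values are arbitrary):

    <a-assets>
      <img id="tex" src="texture.jpg">
    </a-assets>
    <a-box src="#tex" position="0 2 -4"></a-box>
    <a-camera position="0 1.6 3"></a-camera>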




In later posts some further ideas will be explored. 

To read more go to https://aframe.io/docs/0.7.0/introduction/ 





All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
