
Saturday, 13 May 2023

Programming Robots Virtually 4: Preview of Edbot Studio

In previous posts in this series I looked at a few online robot simulators (see the links below).

A recent addition to these is the Edbot Studio Virtual Playground (https://studio.ed.bot/), a preview of simulation technology for Robots in Schools Ltd's Edbot robots.

Two Edbot robots are shown in a gym; you can select actions for the robots to carry out, including Gangnam-style dancing and headstands.







This is really a preview of the technology rather than a programming option - at the moment. Robots in Schools Ltd, who make the Edbots, say the Virtual Playground will become part of their Edbot Studio, a browser-based environment that will allow coding of both virtual and real robots in Scratch, Python and JavaScript.

I am really curious to see the full Edbot Studio in action when it is released, but for the moment, getting virtual robots to dance and kick is still really good fun.


The physical Edbot robots are available from https://ed.bot/







All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Sunday, 21 February 2021

Escape the Maze with a VR robot - Vex VR




You don't need to buy a robot to start programming one; there is now a range of free and relatively easy-to-start-with robot simulators to play with. Three examples are listed below:

It is the last of these (https://www.vexrobotics.com/vexcode-vr) that is the focus of this post, returning to it after an earlier discussion in https://robotsandphysicalcomputing.blogspot.com/2020/04/programming-robots-virtually-1-vexcode.html

Two of the nice things about the package, apart from it being free, are that it uses a Scratch-like programming language and that it provides a 3D environment and models - playgrounds for a number of scenarios.

So in this post, I will be discussing playing, or rather starting to play, with the robot navigating a 3D maze (see the figure above). A feature I particularly like is that you can change the view from an overhead view to an onboard one, or to one that seems to follow the robot.





So as a starting point, I programmed it to essentially bounce along the walls, keeping the wall on its right and stopping when the downward-facing 'eye' detects red on the floor, which marks the end of the maze. The sensors include left and right bumper sensors, along with two colour-detecting sensors, one facing forward and one facing down. The code I used is shown below:




It took 8 minutes to solve the maze - which is slow. I would be interested to see the solutions others have come up with. As a simulated robot programming system this is great fun and challenging; I would recommend having a play - it is free and available at https://www.vexrobotics.com/vexcode-vr. I want to have a go with the Python version to replicate, or better, the solution above (choose a text project rather than a blocks project when starting a new project).
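As a rough starting point for that Python attempt, the bounce-along idea might look something like the sketch below. This is only a hedged approximation of the blocks above, not the code from the post: it assumes the VEXcode VR Python environment provides the drivetrain, left_bumper, right_bumper, down_eye and wait names and the FORWARD/REVERSE/LEFT/DEGREES/MM/MSEC/RED constants, and the exact names and signatures may differ.

# Hedged sketch only: assumes the VEXcode VR Python environment supplies
# drivetrain, left_bumper, right_bumper, down_eye and wait, plus the
# FORWARD/REVERSE/LEFT/DEGREES/MM/MSEC/RED constants. Names may differ.

def bounce_along_walls():
    # Keep driving until the downward 'eye' sees the red patch marking the exit.
    while not down_eye.detect(RED):
        drivetrain.drive(FORWARD)
        if left_bumper.pressed() or right_bumper.pressed():
            # Hit a wall: back off a little, then turn left so the wall
            # just hit ends up on the robot's right-hand side.
            drivetrain.drive_for(REVERSE, 60, MM)
            drivetrain.turn_for(LEFT, 90, DEGREES)
        wait(20, MSEC)  # brief pause between sensor checks
    drivetrain.stop()

bounce_along_walls()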






All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Tuesday, 27 February 2018

WebVR playtime 2: video, 360 video and objects

This is going to be a short series of articles about some experiments with WebVR - Web-based Virtual Reality - in this case based on the wonderful A-Frame (https://aframe.io). In the first post, WebVR playtime 1: Basics of setting up, images and rotating blocks, I looked at setting up a scene and then rotating an object.

In this post, I am going to recap the basics, then look at adding video, 360 degree video, and models developed elsewhere.


1. The approach and setting up

I chose to use A-Frame (https://aframe.io) inside Thimble (https://thimble.mozilla.org); Thimble was selected for four reasons: it is an online editor, it is simple to use, it is free, and you see the preview immediately. In Thimble, though, try to keep image and video file sizes small.

You can pretty much treat it as HTML, after you have added the script line shown below (the aframe.min.js tag).
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>

    </a-scene>
  </body>

</html>
The items to be added all go between <a-scene> and </a-scene>:
      <a-text value="Hello" color="black" position="0 1.8 0.5" width="10"></a-text>
      <a-sky color="orange" ></a-sky>
For example
    <a-scene>
      <a-text value="Hello" color="black" position="0 1.8 0.5" width="10"></a-text>
      <a-sky color="orange" ></a-sky>
    </a-scene>

The video below shows setting up and adding a box to the scene.
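For reference, adding a box is a single extra line inside the scene, something like the snippet below (the position, rotation and colour values here are just placeholder choices, not taken from the video):

    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-text value="Hello" color="black" position="0 1.8 0.5" width="10"></a-text>
      <a-sky color="orange" ></a-sky>
    </a-scene>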




2. Adding video
Actually, in some ways it is as easy to add a video as it is to add an image; at its simplest, adding src="" with either the URL or a relative filename between the quotation marks can be used for both images and videos. Alternatively, the <a-video src=""></a-video> combination, again with the filename or URL between the quotation marks, adds a block and pastes the video on top of it. The video below shows a worked example of these two approaches.
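As an illustration of the <a-video> approach, a scene might contain something like the following (myclip.mp4 is a placeholder filename, and the width, height and position values are arbitrary):

    <a-scene>
      <a-video src="myclip.mp4" width="4" height="2.25" position="0 2 -4"></a-video>
      <a-sky color="orange" ></a-sky>
    </a-scene>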




3. 360 degree video.
A-Frame allows 360 degree video to be incorporated into the scene using the <a-videosphere> tag. The videos below show worked examples of this.
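As a sketch of what that looks like in the markup, a 360 degree clip can be preloaded as an asset and then referenced by the videosphere (my360clip.mp4 is a placeholder filename):

    <a-scene>
      <a-assets>
        <video id="clip360" src="my360clip.mp4" autoplay loop></video>
      </a-assets>
      <a-videosphere src="#clip360"></a-videosphere>
    </a-scene>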




4. 3D objects and Assets
We can also add 3D models that others have developed into our scene. In the video below, a penguin, defined externally using a .obj file for the model and a .mtl file for the material, is loaded into the scene.
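The general pattern for that kind of model looks something like the snippet below, using A-Frame's <a-asset-item> and <a-obj-model> tags (penguin.obj and penguin.mtl are placeholder filenames, and the position value is arbitrary):

    <a-scene>
      <a-assets>
        <a-asset-item id="penguin-obj" src="penguin.obj"></a-asset-item>
        <a-asset-item id="penguin-mtl" src="penguin.mtl"></a-asset-item>
      </a-assets>
      <a-obj-model src="#penguin-obj" mtl="#penguin-mtl" position="0 0 -3"></a-obj-model>
    </a-scene>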




To read more go to https://aframe.io/docs/0.7.0/introduction/ 






All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
