
Web AR Without the Faff: A Maker and Educator's Guide to AR.js Studio


Augmented Reality used to mean expensive apps, locked-down platforms, and a steep learning curve that put most teachers and makers off before they'd even started. That's changed. With A-Frame, AR.js, and the no-code AR.js Studio, you can build a working web-based AR experience in about the same time it takes to make a cup of tea — and share it with nothing more than a URL.

No app to install. No app store. Just a link.

What Is Web AR, and Why Should You Care?

AR.js Studio is an open-source platform for building augmented reality experiences that deploy straight to the web — meaning your audience won't need to download anything extra, and you don't need any coding knowledge to build them.

That matters enormously in education and maker spaces. The barrier isn't usually enthusiasm — it's the first five minutes of setup. Web AR sidesteps that entirely.

The sweet spot for beginners is marker-based AR: print or display a special marker (think of it as a smarter QR code), point a phone at it, and a 3D object, image, or video appears on top of it. A science fair poster that spawns a rotating molecule. A Fab Lab project label that shows assembly steps. An art piece that layers video over print. All accessible from a single URL.
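If you're curious what sits behind the no-code studio, a marker-based page is surprisingly small. The sketch below uses the standard A-Frame and AR.js builds with the bundled "hiro" preset marker; the library version, the marker preset, and the `model.glb` filename are illustrative — your exported project will differ in the details:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame plus the AR.js A-Frame build (version here is illustrative) -->
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
  </head>
  <body style="margin: 0; overflow: hidden;">
    <a-scene embedded arjs>
      <!-- The bundled "hiro" marker; a custom marker swaps in your own pattern -->
      <a-marker preset="hiro">
        <!-- Content anchored to the marker: an assumed model.glb in the same folder -->
        <a-entity gltf-model="url(model.glb)" scale="0.5 0.5 0.5"></a-entity>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
```

Seeing this demystifies what "publish" actually produces: a single page you could host anywhere.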

Step-by-Step: Your First AR Experience

Head to ar-js-org.github.io/studio/ and choose marker-based when prompted.

1. Choose your marker. Download the premade marker from the studio, print it out, or keep it open on a second screen for testing. You can upload a custom image if you want something more on-brand — the site includes guidance on what makes a good marker (high contrast, clear edges, avoid heavy symmetry).

2. Choose your content. Attach a 3D object, image, or video to the marker. For a first experiment, a 3D model is the most impressive. A great source of free, Creative Commons-licensed models is Sketchfab — institutions including the Cleveland Museum of Art and the Smithsonian have shared 3D scans of their collections there. Download your chosen model and upload it directly into AR.js Studio. One practical tip: keep your file extension lowercase (.glb not .GLB) — the studio can be case-sensitive.

3. Publish. Export as a downloaded package to host yourself, or publish directly to GitHub Pages. If you don't have your own server, GitHub is the easier route — create a free account, log in from the studio, name your project, and hit Publish. Within a minute you'll have a live URL.

4. Test it. Open the URL in Safari on iPhone or Chrome on Android, allow camera access, and point your phone at the marker. The 3D model appears. Show someone nearby — their reaction is the best demo you'll ever give.

Ideas for Makers and Educators

Because the AR experience lives at a URL, it's as easy to share as a link in a VLE, on a label, or encoded into a QR code on a physical object. A few directions worth exploring:

STEM projects: Attach a 3D model of a cell, molecule, or planet to a printed card. Students point their phone at it and the model appears — rotating and explorable from every angle. This kind of interactive visual can reveal animated atomic structures or biological systems in ways that static diagrams simply can't.

Maker and Fab Lab projects: Label your physical makes with a marker that, when scanned, shows a 3D assembly animation or a short build video. It turns a finished object into its own living documentation.

Art and creative projects: Pair a printed artwork with a video or 3D layer that appears when scanned — an instant interactive exhibition piece that needs no specialist hardware to run.

What's Next: Location-Based AR

Marker-based AR is a great starting point, but I'm keen to push further — and location-based AR is where I'm heading next. Rather than pointing your phone at a printed marker, location-based AR places 3D objects at specific GPS coordinates, so they appear anchored to real-world positions when you look through your camera. The possibilities for outdoor learning trails, campus tours, or site-specific maker projects are hard to ignore.

The AR.js ecosystem now has a dedicated project for this called LocAR.js. It's the recommended choice for location-based work — maintained more actively than the main repository, with fixes specifically for iOS on both Chrome and Safari. It requires a bit more setup than AR.js Studio, so it's genuinely a step up — but that's part of the appeal. I'll be documenting what I find in a follow-up post, so watch this space.
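For a flavour of what location-based markup can look like, here's a sketch using the GPS components that ship with the A-Frame build of AR.js (`gps-camera` and `gps-entity-place`). Note that LocAR.js itself is a three.js library with its own API, so treat this purely as illustrative — the coordinates and library version are placeholders:

```html
<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
  </head>
  <body style="margin: 0; overflow: hidden;">
    <a-scene vr-mode-ui="enabled: false" arjs="sourceType: webcam; debugUIEnabled: false;">
      <!-- Placeholder coordinates: replace with a spot near your own location -->
      <a-box gps-entity-place="latitude: 52.237; longitude: -0.897"
             material="color: red" scale="10 10 10"></a-box>
      <a-camera gps-camera rotation-reader></a-camera>
    </a-scene>
  </body>
</html>
```

The idea is the same as marker-based AR, but the anchor is a GPS coordinate rather than a printed pattern — walk near the location, look through the camera, and the object appears.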


Key links:

All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon
