Saturday, 24 August 2024

Remote Data Logging with V1 Microbit


In an earlier post https://robotsandphysicalcomputing.blogspot.com/2024/08/microbit-v1-datalogging.html a single microbit was used to log data. The main problem with this is that often we want to sense things that are away from the computer, i.e. remotely (though only a short distance away), and the previous solution had to be attached to the computer to work. One solution (and closer to the solutions used in the 'real world') is to separate the sensing from the receiving and processing, using two different devices. So in this case two microbits: one collecting and sending data, and the other receiving the data and transferring it to the computer.

 

How?

To play with this, we are going to extend our previous solution https://robotsandphysicalcomputing.blogspot.com/2024/08/microbit-v1-datalogging.html into a remote monitoring system; the easiest way to get started is to take a solution that already exists and adapt it. So we are going to use the solution found at https://microbit.org/projects/make-it-code-it/makecode-wireless-data-logger/ as our starting point (I would suggest literally using the MakeCode provided and then adapting it).

 

Advice: to avoid confusion when programming the microbits, as well as using separate MakeCode projects, I would suggest programming them separately, with only one microbit at a time plugged into the computer.

 

Microbit 1: Transmitter and sensors

For those that don't know: a microbit can send small amounts of data over short distances to and from other microbits; details on how can be found at https://makecode.microbit.org/reference/radio. For this application we create a 'group' (really just give the group a number), and then send our sensor values by radio.

We are going to keep sending the values, light and temperature, but not connect this microbit to the computer directly (after programming), so it will need a battery to power it; it can then be moved away from the computer - it's remote!

 

Here is the code:




 

 

Microbit 2: Receiver

This is the one we keep connected to the computer.

What we need to do is the following

-            Use the same radio group number as the transmitter.

-            Receive the radio signal and, based on the name received (in this case x or y), allocate the value sent with the name to a variable.

-            Finally, write the values to the serial connection (just as we did with the single microbit solution) and display them.

Figure 2 Receiver
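The receiver steps above can be sketched in ordinary Python. To be clear, this is a desktop simulation, not micro:bit code: the radio link is stood in for by a plain list of (name, value) pairs, and the names x and y follow the example project.

```python
# Desktop sketch of the receiver's logic (a simulation, not MakeCode):
# the radio is replaced by a list of (name, value) pairs as they arrive.

def receive_all(packets):
    """Allocate each received value to a variable based on its name."""
    x = y = None
    log = []
    for name, value in packets:
        if name == "x":        # first sensor reading (e.g. light)
            x = value
        elif name == "y":      # second sensor reading (e.g. temperature)
            y = value
        log.append((x, y))     # what would be written out over serial
    return log

# Two rounds of readings arriving over the simulated radio.
print(receive_all([("x", 120), ("y", 21), ("x", 98), ("y", 22)]))
# → [(120, None), (120, 21), (98, 21), (98, 22)]
```

Until the first y arrives there is no value to pair with x, which is why the first entry contains None; on the real device the display simply shows whatever the variables currently hold.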

 

 

Summary

Essentially we have split what we did in the first activity across two microbits connected by a radio link. Clicking the "Show data" button in the receiver's MakeCode window should, as before, give you something like this:

Figure 3 Data logging remotely

Unless the temperature is changing fairly quickly you might not see a great deal of change, but hold the transmitter in your hand and you will see some. The light level can also be made to change by moving the transmitter so the LED array points at a bright screen.

 

To save the data as a CSV file, to view later in a spreadsheet, just use the blue download icon as before.

 

Some suggestions for improvement

-            When the two microbits are running without the graphs, just by looking at the microbit (especially the transmitter) it is not clear that it is doing anything. Could something be added to fix this?

-            Play around with the sampling rate (i.e. how often you send data). In the transmitter, do what we did in our single microbit solution: replace the forever loop with one that sends data every so often.
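The sampling-rate idea can be sketched in plain Python, with time stood in for by a list of millisecond timestamps rather than the microbit's own clock:

```python
# Sketch: send a reading only every `interval_ms`, rather than as fast
# as the forever loop runs. `timestamps_ms` simulates the times at which
# the loop body executes.
def due_samples(timestamps_ms, interval_ms):
    """Return the timestamps at which a new sample would actually be sent."""
    sent = []
    last_sent = None
    for t in timestamps_ms:
        if last_sent is None or t - last_sent >= interval_ms:
            sent.append(t)
            last_sent = t
    return sent

# With a 1-second interval, only three of these six loop passes send data.
print(due_samples([0, 300, 600, 1100, 1900, 2100], 1000))
# → [0, 1100, 2100]
```

On the microbit itself the same effect comes from simply pausing inside the loop; the point of the sketch is that the radio traffic (and logged data) thins out as the interval grows.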

Have fun.


All opinions in this blog are the Author's and should not in any way be seen as reflecting the views of any organisation the Author has any association with. Twitter @scottturneruon

Thursday, 22 August 2024

Microbit V1 datalogging

Often we need applications that allow the collection of data over time, for example temperature or light levels through the day, potentially allowing us to analyse the data for trends. The microbit is a fantastic tool for this, with some of these sensors already in place (e.g. light and temperature), and more can be added with add-on boards (such as the Kitronik Air Quality and Environmental Board for micro:bit https://shop.pimoroni.com/products/kitronik-air-quality-and-environmental-board-for-micro-bit?variant=39475687227475 ).

 

Datalogging with a V2 microbit is relatively easy; all the details to get started are available here: https://microbit.org/get-started/user-guide/data-logging/.

 

But what about the older V1: can it do it?

 

The answer is yes, but it is a little more work and generally a little more limited; still, it is very worthwhile. In this post we are going to look at doing this.

 

Figure 1 shows starting the process off in MakeCode (https://makecode.microbit.org/#editor). The basic mechanism, every 1 second, is:

  • The light level is written from the microbit to the computer (via the USB) one value at a time, i.e. serially;
  • The same is done for the temperature.

That is it to start with.
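As I understand it, MakeCode's "serial write value" block sends each reading as a name:value pair on its own line. A small Python sketch of that format (the names here are just examples, not fixed by MakeCode):

```python
# Sketch of the line format produced by MakeCode's "serial write value"
# block, and of parsing it back on the computer side.

def serial_write_value(name, value):
    """Format a reading as a name:value line, MakeCode-style."""
    return f"{name}:{value}"

def parse_line(line):
    """Split a received line back into a (name, value) pair."""
    name, value = line.split(":")
    return name, int(value)

print(serial_write_value("light", 128))   # → light:128
print(parse_line("temperature:21"))       # → ('temperature', 21)
```

This pairing of a name with each value is what lets the "Show data" graphs (and later the receiver microbit) tell the two sensor streams apart.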

Figure 1

Figure 2 shows where the various elements are in the menus. To get the serial ones you will need to open the Advanced menu and then the Serial menu options (see Figure 3).

Figure 2

 

Figure 3


Figure 4


Once everything is set up, under the simulated microbit (Figure 4) you will see another button, “Show data Simulator”. We can use the microbit simulator to simulate light and temperature levels: click on the “Show data Simulator” button and graphs start rolling across the screen. Drag the temperature and light levels on the microbit simulator and you will see the graphs change. It is logging the simulated data: it works!

Now for the fun bit.

Click on Download; we now need to pair the computer and the microbit. Figures 5 to 8 show the steps.

Figure 5


Figure 6


Figure 7


Figure 8





We can now take values from the real device. A new button should have appeared alongside the “Show data Simulator” button: “Show data Device”. Click on this and instead of simulated data we get data from the microbit (see Figure 9).

Figure 9

 

So in Figure 9, the top graph is the light level and the bottom is the temperature, taken from the room. Play with covering the light sensor (the LED grid) and the light levels change.

 

Collecting data is great, but we are taking it one step further: logging the data over time and then sharing it. To do this, click on the blue download button above the graphs to save the logged data as a CSV file. Once it is in CSV format it is yours to play with in other tools, such as spreadsheets.
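The downloaded CSV can be explored outside a spreadsheet too. Here is a sketch using Python's csv module, with a made-up sample standing in for the real file (the actual column names depend on the value names you logged):

```python
import csv
import io

# Hypothetical sample of a downloaded data-log CSV; the real file's
# columns follow the names used with "serial write value".
sample = """time (s),light,temperature
0,128,21
1,130,21
2,95,22
"""

# DictReader gives one dict per row, keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample)))
temperatures = [int(row["temperature"]) for row in rows]
print(len(rows), max(temperatures))  # → 3 22
```

Reading a real file is the same, just with `open("log.csv")` in place of the `io.StringIO` stand-in.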

 

 

Activity

-            How can this be made more meaningful?

-            How do we do this so that a second microbit can see the data – some more remote monitoring (see https://microbit.org/projects/make-it-code-it/makecode-wireless-data-logger/ )?

-            How could you do this in Python?






Thursday, 4 April 2024

Top posts on this blog in March 2024

The top 10 most-viewed posts on this blog in March 2024, covering areas such as small robots, augmented reality, Scratch programming, and robots in general.







Wednesday, 26 July 2023

ChatGPT, Data Scientist - fitting it a bit

This is the second post about using ChatGPT to do some data analysis. The first looked at using it for some basic statistics: https://robotsandphysicalcomputing.blogspot.com/2023/07/chatgpt-data-scientist.html

This time, the CSV file contained some publication data: the year of publication; the number of papers published in that year; and the number of citations the papers from that year received over time. The data was collected from Google Scholar for one person.



The prompt used (again with the Code Interpreter plug-in):

The transcript is shown below.

Analyse this data as a Data Analyst including linear and polynominal fitring
papervcitation.csv

Interestingly it spotted some errors in its own code and corrected them. 

I apologize for the inconvenience, there seems to have been a technical issue with the code execution. Let's try to load the dataset again.

Finished working
Hide work
python
# Load the new dataset
data_new = pd.read_csv('/mnt/data/papervcitation.csv')

# Display the first few rows of the dataset
data_new.head()
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Cell In[1], line 2
      1 # Load the new dataset
----> 2 data_new = pd.read_csv('/mnt/data/papervcitation.csv')
      4 # Display the first few rows of the dataset
      5 data_new.head()

NameError: name 'pd' is not defined

Apologies for the inconvenience. It seems there was a mistake in the code execution. We need to import the pandas library first. Let's correct that and proceed.

Finished working
Hide work
python
import pandas as pd

# Load the new dataset
data_new = pd.read_csv('/mnt/data/papervcitation.csv')

# Display the first few rows of the dataset
data_new.head()
RESULT
   year  citations  no of papers
0  1999          5             2
1  2000          0             0
2  2001         23             3
3  2002         76             2
4  2003         36             4

    This dataset contains three columns: 'year', 'citations', and 'no of papers'.

    • 'year': This seems to represent the year in which the papers were published or the citations were received.
    • 'citations': This likely represents the total number of citations received in each year.
    • 'no of papers': This probably represents the number of papers published in each year.

    Let's proceed with some basic data analysis. We'll start by checking the basic statistics of the dataset.

    Finished working
    Hide work
    python
    # Check basic statistics of the data
    data_new.describe()
    RESULT
                  year  citations  no of papers
    count    25.000000  25.000000     25.000000
    mean   2011.000000  23.560000      4.960000
    std       7.359801  24.627356      4.286413
    min    1999.000000   0.000000      0.000000
    25%    2005.000000   2.000000      2.000000
    50%    2011.000000  19.000000      4.000000
    75%    2017.000000  39.000000      7.000000
    max    2023.000000  81.000000     14.000000

    Here are some basic statistics for the 'year', 'citations', and 'no of papers' columns:

    • The data spans 25 years, from 1999 to 2023.
    • The average number of citations per year is approximately 23.56, with a standard deviation of about 24.63.
    • The number of citations received in a year ranges from 0 to 81.
    • The average number of papers published per year is approximately 4.96, with a standard deviation of about 4.29.
    • The number of papers published in a year ranges from 0 to 14.

    Next, let's visualize the trend of 'citations' and 'no of papers' over the years. We'll also perform linear and polynomial fitting to these trends.

    Finished working
    Hide work
    python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # Create a linear regression model
    linear_regressor = LinearRegression()

    # Fit the linear regression model to the data
    X = data_new['year'].values.reshape(-1, 1)
    y_citations = data_new['citations'].values
    y_papers = data_new['no of papers'].values
    linear_regressor.fit(X, y_citations)
    y_citations_pred = linear_regressor.predict(X)
    linear_regressor.fit(X, y_papers)
    y_papers_pred = linear_regressor.predict(X)

    # Fit a polynomial of degree 3 to the data
    poly = PolynomialFeatures(degree=3)
    X_poly = poly.fit_transform(X)
    poly_regressor = LinearRegression()
    poly_regressor.fit(X_poly, y_citations)
    y_citations_poly_pred = poly_regressor.predict(X_poly)
    poly_regressor.fit(X_poly, y_papers)
    y_papers_poly_pred = poly_regressor.predict(X_poly)

    # Plot the data and the fitted lines
    plt.figure(figsize=(12, 6))

    # Plot the original data
    plt.scatter(data_new['year'], data_new['citations'], color='blue', label='Citations')
    plt.scatter(data_new['year'], data_new['no of papers'], color='red', label='No of Papers')

    # Plot the linear fits
    plt.plot(data_new['year'], y_citations_pred, color='blue', linestyle='--', label='Citations Linear Fit')
    plt.plot(data_new['year'], y_papers_pred, color='red', linestyle='--', label='No of Papers Linear Fit')

    # Plot the polynomial fits
    plt.plot(data_new['year'], y_citations_poly_pred, color='blue', linestyle=':', label='Citations Polynomial Fit')
    plt.plot(data_new['year'], y_papers_poly_pred, color='red', linestyle=':', label='No of Papers Polynomial Fit')

    plt.title('Citations and No of Papers over Years with Linear and Polynomial Fits')
    plt.xlabel('Year')
    plt.ylabel('Count')
    plt.legend()
    plt.grid(True)
    plt.show()
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    Cell In[4], line 31
         28 y_papers_poly_pred = poly_regressor.predict(X_poly)
         30 # Plot the data and the fitted lines
    ---> 31 plt.figure(figsize=(12,6))
         33 # Plot the original data
         34 plt.scatter(data_new['year'], data_new['citations'], color = 'blue', label='Citations')
    
    NameError: name 'plt' is not defined
    

    I apologize for the oversight. It seems I forgot to import the matplotlib library which is used for creating plots. Let's correct that and proceed.

    Finished working
    Hide work
    python
    import matplotlib.pyplot as plt

    # Plot the data and the fitted lines
    plt.figure(figsize=(12, 6))

    # Plot the original data
    plt.scatter(data_new['year'], data_new['citations'], color='blue', label='Citations')
    plt.scatter(data_new['year'], data_new['no of papers'], color='red', label='No of Papers')

    # Plot the linear fits
    plt.plot(data_new['year'], y_citations_pred, color='blue', linestyle='--', label='Citations Linear Fit')
    plt.plot(data_new['year'], y_papers_pred, color='red', linestyle='--', label='No of Papers Linear Fit')

    # Plot the polynomial fits
    plt.plot(data_new['year'], y_citations_poly_pred, color='blue', linestyle=':', label='Citations Polynomial Fit')
    plt.plot(data_new['year'], y_papers_poly_pred, color='red', linestyle=':', label='No of Papers Polynomial Fit')

    plt.title('Citations and No of Papers over Years with Linear and Polynomial Fits')
    plt.xlabel('Year')
    plt.ylabel('Count')
    plt.legend()
    plt.grid(True)
    plt.show()

    The plot above shows the original data for 'Citations' and 'No of Papers' along with the fitted linear and polynomial curves for each.

    The linear fit provides a simple model of the overall trend, while the polynomial fit can capture more complex patterns in the data. However, the polynomial fit can also be more susceptible to fluctuations in the data and may not generalize as well to future data.

    From the graph, it's evident that both the number of papers and citations have a general upward trend over the years. The number of citations seems to have a sharper increase compared to the number of papers, indicating that each paper might be getting more citations over the years. The polynomial fit captures the fluctuation in the data more accurately than the linear fit.

    If you have more specific questions or if there's another aspect of the data you'd like to explore, please let me know!


Thoughts: not bad; the linear fit is reasonable. In particular, "indicating that each paper might be getting more citations over the years" is a valid conclusion. The polynomial fitting worked, but is not a great technique in this case.
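For comparison, the same two fits can be done more directly with numpy's polyfit, without the sklearn pipeline ChatGPT chose. This sketch uses made-up, exactly linear data in place of the real citation counts, so the expected result is known in advance:

```python
import numpy as np

# Made-up stand-in for the citations column: exactly linear in the year,
# so we know what the fits should recover (slope 2, intercept 3).
x = np.arange(0, 25)          # years since 1999
citations = 2.0 * x + 3.0

linear = np.polyfit(x, citations, 1)   # degree-1 (straight line) fit
cubic = np.polyfit(x, citations, 3)    # degree-3 fit, as in the transcript

print(round(float(linear[0]), 3))  # → 2.0
```

On data like this the cubic fit recovers essentially the same line (its higher-order coefficients come out near zero), which illustrates the point above: the extra degrees only matter when the data genuinely curves, and otherwise just add a way to chase noise.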



Who wants to produce AI produced cartoon strips

Question: How easy is it to produce a comic/cartoon using generative AI? Let's start with using ChatGPT-4o to produce cartoons. The idea wa...