My Commute Carbon Footprint

(A quick back-of-the-envelope calculation)

I’d like to know how much CO2 I’m putting into the atmosphere from my commute.

I recall that I fill up about twice a month, and the petrol tank usually takes about 40 litres each time. I could work out the MPG, but this is a quick estimate. It’ll do for now.

Burning petrol releases about 2.31 kg of CO2 per litre (according to
https://people.exeter.ac.uk/TWDavies/energy_conversion/Calculation%20of%20CO2%20emissions%20from%20fuels.htm)

Therefore, in a month, two tankfuls (80 litres) burn into…

80 × 2.31 = 184.8 kg of CO2

Taking holidays into account, let’s assume I do that commute for 11 months of the year, which means my commute looks like

184.8 × 11 = 2032.8 kg, or just over 2 tonnes of CO2
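
For anyone who wants to tweak the assumptions, the whole estimate fits in a few lines of Python (the figures are the ones above; the twice-a-month fill-ups and the 11 months are my rough guesses):

LITRES_PER_FILL = 40      # the tank takes about 40 litres each time
FILLS_PER_MONTH = 2       # I fill up about twice a month
KG_CO2_PER_LITRE = 2.31   # petrol, per the Exeter reference above
MONTHS_COMMUTING = 11     # allowing a month of holidays

monthly_kg = LITRES_PER_FILL * FILLS_PER_MONTH * KG_CO2_PER_LITRE
yearly_tonnes = monthly_kg * MONTHS_COMMUTING / 1000
print(f"{monthly_kg:.1f} kg/month, {yearly_tonnes:.2f} tonnes/year")
# 184.8 kg/month, 2.03 tonnes/year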

There we go. 2 tonnes. The next challenge is to try to reduce it.

According to this article at The Guardian, manufacturing a small to medium-sized car might produce 6-17 tonnes of CO2, which I need to factor in when deciding whether to replace the car.

Note: My internet-based research said a typical passenger car emits about 4.6 tonnes a year, according to https://www.epa.gov/greenvehicles/greenhouse-gas-emissions-typical-passenger-vehicle. My car does less than the average mileage, and I get better fuel economy than the 22 mpg stated in the epa.gov article, so I’m comfortable with the figures I’ve calculated.


AWS Logging (or “why am I being charged for free-tier S3?”)

I have an AWS account. I created it less than a year ago, so I get some free stuff with the account. In the billing dashboard, I was somewhat concerned to see this:

It was only 5 days to the end of the month, but I thought I hadn’t uploaded that many objects to S3, and wondered where this was coming from.

A few weeks ago, I was exploring AWS CloudTrail, which logs every API call, and I had set it up to store data for some later analysis. Now seemed like a good time to do that analysis. I also wanted to explore AWS Athena, which is AWS’s implementation of Presto as a service, and lets you run SQL queries directly on your S3 content (without having to import it into expensive Redshift, or elephantine EMR).

On the CloudTrail interface is a handy link to Athena: it’s like they WANT you to use it:

Clicking that link starts a simple wizard that creates an “External Table” in Athena pointing to the S3 bucket where your CloudTrail logs are stored. This table tells Athena where to find the CloudTrail data and how to interpret it. The syntax is familiar to anyone with a background in SQL, and the details are familiar if you have dug into the depths of Apache Hive (SQL-on-Hadoop).

Within a few minutes, and following some tutorials, I had a query running that counts the number of API calls on an S3 bucket. Here is the code:

SELECT count(useragent) AS QTY, eventname, sourceipaddress, awsregion
FROM my_cloudtrail_logs
WHERE requestparameters LIKE '%my-cloudtrail-logs-bucket-%'
GROUP BY sourceipaddress, eventname, awsregion
ORDER BY QTY DESC
LIMIT 5;

This code gives the top 5 combinations of event name, source IP and region. Here are the results:

We learn several things from this exercise:

  • The culprit, the reason I went over my Free Tier limit for S3, is in fact CloudTrail. It has put over 13,000 objects into an S3 bucket.
  • I should also be concerned with the top row: Athena doing 173,000 “GetObject” calls to parse my data. That count increased to 202,989 the next time I ran the script. Row 3 has similar behaviour.
  • The source IP addresses in lines 4 and 5 are my own. No concerns there.

The big money question: how much did the Athena script cost to run?

  • Athena charges $5.00 per TB scanned. The Athena console says it scanned 35 MB, so the Athena cost is $0.00017.
  • S3 charges $0.0004 per 1,000 GET requests, so those ~203,000 GETs cost about $0.08.
  • Incidentally, the 13,000-odd PUT requests from CloudTrail cost a further $0.07 (the arithmetic is sketched below).
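
As a sanity check, here is that arithmetic in a few lines of Python. The prices are the ones quoted above, except the PUT rate ($0.005 per 1,000 requests), which is my assumption of the standard S3 rate, since only the total appears on the bill:

athena_usd_per_tb = 5.00        # Athena: $5.00 per TB scanned
s3_usd_per_1000_gets = 0.0004   # S3 GET price, from the bill
s3_usd_per_1000_puts = 0.005    # S3 PUT price -- assumed standard rate

mb_scanned = 35                 # from the Athena console
gets = 202_989                  # Athena's GetObject calls
puts = 13_000                   # CloudTrail's PutObject calls, roughly

print(f"Athena: ${athena_usd_per_tb * mb_scanned / 1024**2:.5f}")   # $0.00017
print(f"S3 GETs: ${s3_usd_per_1000_gets * gets / 1000:.3f}")        # $0.081
print(f"S3 PUTs: ${s3_usd_per_1000_puts * puts / 1000:.3f}")        # $0.065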

Closer inspection of the log files suggests that CloudTrail is logging the API calls that CloudTrail itself makes when it writes to S3, and then writing those logs to S3 … CloudTrail added over 170 new objects the next time I ran this query! Time to modify my CloudTrail configuration.

Justin – 26 March 2019


A 3D printer diary – printing another box

Following the 3D printer’s instructions online and in the book that came with it, I heated the print head up to working temperature, then carefully used a pair of tweezers to tweeze any filament off the brass nozzle. Doing this a few times removed quite a bit of the stringy stuff.

Then I tried another print job, and here is the result, with a Londoner for scale.

Some lessons were learned from this print job, too. The short version is as follows:

  • Don’t start a print job at 8pm. You’ll get a late night.
  • My Cura settings were wrong.
  • The stringy version last time was caused by a partly-blocked nozzle.
  • Supports are a great idea.

The quality of this box is much better. I think this is because the filament was coming through the nozzle at the right speed. Here is an action shot:

In the picture above, it looks like the supports are quite solid. They were easy to break off with fingers. The photo below shows what the supports are like and also shows off the clean base of the box. Compare that with the previous one. And the one before that.

There were a few rough spots on the base and on the inside of the container, which just need a bit of gentle filing to correct. It’s sturdy, and as functionally good as the original.

As mentioned before, I’m using Cura to slice the model, because Cura can export to .g3drem files. I corrected my Cura settings by reading the GitHub page of the guy who wrote the .g3drem plugin. Why is reading the documentation the LAST thing we ever do?


A 3D printer diary – printing a box

3D printing takes AGES! Even something as simple as the small container shown below takes four hours!

The 3D-printed version is on the left.
The original (injection-moulded) is on the right.

A lot of lessons were learned from this print job. The short version is as follows:

  • Don’t start a print job at 8pm. You’ll get a late night.
  • Supports are a great idea.
  • The raft underneath was a waste of time on this model.
  • The box is flimsy. See below for why.
  • My Cura settings were wrong.
  • The nozzle was still slightly blocked (as I found out a few days later).

3D printing involves layering tiny amounts of hot plastic, in layers of 0.1 mm (or more, or less, depending on the printer). The software takes a 3D file (such as an .STL or .OBJ file) and “slices” it into hundreds of layers. This takes time. The white box above took nearly four hours to print, and I got to bed at midnight.
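
To get a feel for where the hours go, here is the layer count as a quick sketch (the 40 mm box height is my guess, not a measurement):

box_height_mm = 40        # guessed height of the box -- I didn't measure it
layer_height_mm = 0.1     # one layer of hot plastic
print(round(box_height_mm / layer_height_mm), "layers")   # 400 layers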

The observant will notice that the top of the box is a different colour. With 12 minutes to go, the white filament ran out. I cheekily pushed the end of a length of yellow filament into the top of the filament feeder whilst it was still running, guesstimating the correct time to do so. I’m sure one is supposed to pause the print whilst doing this, but it worked for me this time.

The resultant box has weak sides and feels flimsy. It feels like it’s made of thin paper, and the quality of the sides is terrible. There were two possible issues: one was that my settings in the Cura software were incorrect. I’m using Cura to add supports and the raft, because the official DigiLab software won’t export in the correct format. The other issue (as I found out) was that the nozzle was still blocked.

The underneath of the box is much better than last time. Apart from a few stringy bits, it’s relatively clean. The supports were very thin, as they are meant to be, and broke off easily.

I’m not convinced that a raft was really necessary: it took 31 minutes to print the raft, before the supports and the model had even started, which felt like a waste of time on this model. There are good examples where a raft is important, particularly for models which don’t have a flat base.

Next, we will give the nozzle a really good clean, and try again.


A 3D printer diary – clogged nozzle

After printing a stringy mess, the nozzle of my 3D20 is covered in goo (filament). At room temperature, it’s a nice hard crusty smooth goo. The instructions explain how to clean the nozzle. There’s a nice video (with jolly music) that shows how to do it too:

  • Put the printer in heat mode to get the nozzle up to the correct temperature.
  • Don’t touch the nozzle – it’s 220 degrees C.
  • Remove filament from the top of the print head.
  • Don’t touch the nozzle. It’s hot.
  • Let the filament ooze through the nozzle for a bit.
  • Did we say the nozzle is hot?
  • Poke the cleaning tool in at the top and push out any filament.

I did this, and the cleaning tool went quite deep into the print head, almost to the nozzle. Then I pulled the cleaning tool out, and as I did so, I realised something had gone wrong. It felt like filament had been pulled out with the cleaning tool. I tried poking the cleaning tool in again, and it wouldn’t go. I guessed that the filament had solidified on the top of the metal part of the print head.

Schematic showing filament (in green for this diagram)

As expected, I’m not the first to experience problems like this. Here is a video (with no sound) showing how to dismantle the print feeder to get at the print head. Be careful not to lose the nylon spacers behind the heat sink, behind the fan.

Photo showing my blocked print head (looking down on the steel print head). The blob of filament is white.

With filament back in the machine, a print job could be started. The results are shown below, and I’ll talk more about it in the next post.

3D printed version on the left in white.
Original (injection-moulded ABS) on the right in yellow.


A 3D printer diary – supports

The SD card with the correct software was missing when we bought our Dremel 3D20 (it was a bargain). But I found the “Dremel Idea Builder” software and a firmware update on their website, which allow me to load STL files and send them to the printer via a USB cable.

Unfortunately, the “Idea Builder” software has no option for adding supports to a model. Supports are needed so that the nozzle doesn’t print filament over an empty space.

However, Dremel now offer new slicing software called “DigiLab Slicer”. It is easy to use, and has options to add supports, rafts and skirts, change the print quality, and do the slicing. It saves in .gcode format. It is based on the open-source Cura software.

This is all good news … except that Idea Builder doesn’t like .gcode files. It needs its own special format, .g3drem. And DigiLab Slicer doesn’t save in .g3drem format. It would appear that I’m not the only person with this problem. Dr. Peter Falkingham, a university lecturer in vertebrate biology, prints models of animal bones for use in his lectures, and he has had the same problem: most slicing software can create .gcode files, but how do you create a .g3drem file?

Update 15 March – I emailed Dr. Falkingham, who clarified that the latest firmware allows the 3D20 printer to read .gcode files off the SD card. The problem I experienced exists when printing from the “Idea Builder” software via USB cable: “Idea Builder” does not read .gcode files.

Further online searching revealed that others had the same problem too. One guy wrote a plugin for Cura that saves to .g3drem format. A quick installation of Cura, then a visit to the Cura Marketplace (from within the Cura software), and the plugin was installed. It isn’t supported by Cura and comes with no warranty (except that your printer might catch fire), but I’m happy with that.

So now I can create supports for the model from the STL file, save it as a .g3drem, and load that into Idea Builder to send it to the printer.

Except that the nozzle is blocked. That’s a job for another day.


A 3D printer diary – getting going again

Our 3D printer (Dremel 3D20) was a bargain, because bits were missing. After five emails to Dremel support, and various phone calls, I eventually found a supplier who was willing to order the spare parts that I needed – mainly the spool lock, so that the spool of special Dremel PLA will rotate freely.

Using some of the spare filament that my dad gave me, I attempted to print a small box. This box has feet, so the area touching the build plate is relatively small, and the bottom of the box is off the ground. After about 15 minutes, the printer had created a stringy mess and had pushed the model sideways on the build plate … and was now attempting to add fresh filament on top of empty space.

A stringy mess – after the print head nudged the model out the way and tried to print filament on empty space.

Underneath – another stringy mess. Apparently there is software that can add supports to overhanging structures.

The main lesson learned today is to add supports to the model before printing it. That’s a job for another day.


Vintage plastic toy train

Ecoiffier toy train. Date of manufacture unknown.
Photographed at my parents-in-law


AWS Storage capacity

I remember when I thought a gigabyte was a lot. I bought an external 1GB hard disk in 1995, and filled it up in no time at all. As geeky things go, it was pretty exciting.

Hadoop is designed for petabyte-scale data processing. Hadoop’s filesystem, HDFS, has a set of Linux-like commands for filesystem interaction. For example, this command lists the files in the HDFS directory shown:

hdfs dfs -ls /user/justin/dataset3/

As on Linux, other commands exist for finding information about file and filesystem usage. For example, the du command gives the amount of space used by the files in a directory:

hdfs dfs -du /user/justin/dataset3
1147227 part-m-00000

and the df command shows the capacity and free space of the filesystem. The -h option displays the output in human-readable format, instead of a very long number.

hdfs dfs -df -h /
Filesystem              Size     Used     Available  Use%
hdfs://local:8020       206.8 G  245.6 M    205.3 G   0%

The hdfs command also supports other filesystems. You can use it to report on the local filesystem instead of HDFS, or on any other filesystem for which it has a suitable driver. Object storage systems such as Amazon S3 and OpenStack Swift are also supported, so you can do this:

hdfs dfs -ls file:///var/www/html   # the local filesystem
hdfs dfs -df -h s3://dataset4/      # an Amazon S3 bucket called dataset4

Here is a screenshot showing the results of doing just that (from within an Amazon EMR cluster node).

It suggests that the available capacity of this s3 bucket is 8.0 exabytes. This is the first time I’ve ever seen the exa SI prefix in the output of a disk capacity command. As geeky things go, it’s pretty exciting.

I assume this is just a reporting limit set in the s3 driver, and that the actual capacity of s3 is higher. AWS claim the capacity of s3 is unlimited (though each object is limited to 5TB). AWS is constantly expanding, so it is safe to assume that AWS must be adding capacity to s3 all the time.
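
My guess (an assumption on my part, not anything AWS document) is that the driver simply reports the largest value a signed 64-bit byte counter can hold, and 2^63 bytes is exactly 8 EiB:

print(2**63 - 1)       # 9223372036854775807 -- the largest signed 64-bit value
print(2**63 / 2**60)   # 8.0 -- that many bytes is (essentially) 8 exbibytes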

The factor limiting how much data you can store in s3 will be your wallet. Use of s3 is charged per GB per month. Prices vary by region, and start at 2.1c/GB per month (e.g. Virginia, Ohio or Ireland). For large-scale data and infrequent access, prices drop to around 1c/GB, assuming you don’t want to do anything with your massive data-hoard.

Using “one-zone-IA” (1c/GB/month), it will cost US$86 MILLION a month to store 8 EB of data, plus support, plus tax. If you want to do anything useful with the data, a different storage class might be more appropriate, and you should also expect significant cost for processing.
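
For the curious, that figure comes from treating 8 EB as 8 × 2^30 GB (a quick sketch; read decimally, as 8 × 10^9 GB, it would be a mere US$80 million):

usd_per_gb_month = 0.01    # "one-zone-IA", 1c/GB/month as above
gigabytes = 8 * 2**30      # 8 EB expressed in binary GB
print(f"${gigabytes * usd_per_gb_month:,.0f} per month")
# $85,899,346 per month -- call it US$86 million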

Justin – February 2019


Mixing hobbies – Lego train on 16mm Garden Railway (part 2)

In my last post, I lamented the fact that an inside-frame chassis on 16mm track wasn’t possible in Lego. The gauntlet was picked up by my friends in the adult Lego fan community, and a potential solution looks like this:

Top: Outside Frame bogie in Lego on 16mm track
Middle: Inside Frame bogie in Lego on 16mm track
Bottom: Lego track

This solution uses a piece called a “Minifig bracket”: the thin L-shaped piece has a hole on one plate and a stud on the other. In the right orientation, they can be used to hold the side frames rigidly to the middle section. A 1×1 brick with a Technic hole (in the middle) supports an axle, which has a stop at one end that is a friction fit in the hole.

Deconstruction of 16mm inside-frame bogie out of LEGO parts.

The deconstructed view shows the brackets in more detail.

This exact build has another problem: the axles rub against the tan-coloured end bricks. The easy solution would be to make the side frames 10 studs long instead of 8. There might be another solution … so watch this space.

Justin – February 2019
