Archive for the ‘Typica’ Category

This blog is about compiling the 1.7 release of Typica and the Phidgets libraries for Linux. Neal is currently working on 2.0 and hoping to release it sometime in the future, and all of this will change drastically when he does.

Linux is a fantastic community-driven operating system. It is Free and Open Source Software (FOSS), meaning you get the software along with its source code and the freedom to do what you want with it.

Typica is also FOSS. Neal provides binaries that will run in the Mac OS X world and the Windows world so that you do not have to compile it, and this works for probably 90% of the people.

I like Linux for many reasons — the freeness, the freedom, the community-driven development, and also the ability to re-use hardware that might not run the ever larger and slower Windows or Mac OS X operating systems. Sometimes the cast-away hardware from those operating system spheres still has good life left in it and doesn’t have to be junked.

I was using the Mac Mini for my database server and am still using a Dell laptop for my Linux compile environment and roast-logger. It is a very old system; while this XT has a touch screen, it only has a 32-bit CPU. If you follow my guide you will have to keep this in mind and figure out some of the 64-bit differences yourself. Feel free to leave those as comments on this article.

I am using Linux Mint, a Debian/Ubuntu-based system.

Install Packages

You will need to update your system to the latest release. Following that, install some additional packages.

sudo apt-get update
sudo apt-get upgrade
(possibly reboot)
sudo apt-get install libqt4-sql-psql libqt4-dev libqt4-dev-bin \
 libusb-1.0-0-dev libc-dev-bin gcc-multilib g++-4.6 g++ \
 libqtwebkit-dev libqtwebkit4


Source Setup

You need a directory for source code if you do not already have one. Make a directory called something like “src” in your home directory unless you already have a preference set up. In this directory you will set up two trees — one for the Phidgets libraries and one for Typica itself.
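For example, a minimal setup (assuming you go with “src” in your home directory):

mkdir ~/src
cd ~/src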

Phidgets Library Compiling

Go ahead and open a terminal window and cd into your source (or src) directory. You need to download the latest Phidgets source into this directory. I like to browse to their page for Linux support, copy the URL, and then use “wget” to fetch the file to my current directory on the system where I will compile. It saves a step or two of having to copy it over later.

The current download URL is the one used below, but that could change, and you can use Google or whatever search engine you like best in the future… Once you have downloaded the file, extract it, cd into the new directory, and compile the libraries. Since you have installed all the packages listed above, it should be as simple as the instructions on the Phidgets site, also included below:

wget http://www.phidgets.com/downloads/libraries/libphidget.tar.gz

tar xfvz libphidget.tar.gz

cd libphidget-2.1.8.20151217

./configure

make

sudo make install

If your Phidgets source directory is named differently than mine, you will need to update that in the Typica instructions below, where we modify the project file to pull in the Phidgets headers.

Typica Compiling

You should “cd” up one directory to the “src” parent directory. Now download the Typica (probably 1.7) source code from the Git site. If you hover over the “Download Zip” link you should see a URL to a “master.zip”; copy the URL and use it with wget again. After downloading the zip we will uncompress it, which will create many directories. We will “cd” into the first level and then into the src subdirectory, and we will create a backup copy of the one file that we will change.

 wget https://github.com/N3Roaster/typica/archive/master.zip


unzip master.zip


cd typica-master/src


cp Typica.pro Typica.pro.orig

Now we are ready to get into the nitty gritty. In previous blogs I was able to get around the Vi vs. Emacs vs. Nedit debate by having you use an echo command to append new statements to a file. That approach works here as well. So even though I would really love to make sure you embrace the power of the Vi side, I will again use the echo-append statement in my examples for brevity’s sake. It is very important that you add three separate and distinct lines to the file.

echo "include(3rdparty/qextserialport/src/qextserialport.pri)
INCLUDEPATH += /home/kevin/src/libphidget-2.1.8.20151217/
LIBS += -lphidget21
" >> Typica.pro

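You can check that the new lines landed at the end of the project file before building:

tail Typica.pro
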
You are now ready to build the Makefile and then compile the program. First you execute the “qmake” program to create the Makefile, and then you run the “make” program to compile the code into executable binaries.

qmake

make

Bam! You are ready to execute the newly compiled program. Unfortunately, because of the libraries, you will need root privileges again, so you must execute the “Typica” binary with sudo.

sudo ./Typica

If everything worked successfully, the program is now running and asking how to connect to your database.


The inevitable “oompf” happened much sooner than expected: the Mac Mini Linux server died.

I hadn’t even fully tested my backup method! Nothing like trial by fire to make the theory real.

If you don’t want to read the whole article and just want to know “did it work?” — yes, it worked perfectly.

I had a very old Dell laptop (older than my Mac Mini, actually) that I was setting up as my Typica compile environment. I also used it to verify some of my commands in the previous articles for installing and setting up the PostgreSQL database under Linux. The only things it hadn’t done were actually create a database or connect to the cloud. To sum up the situation: though far from ideal, it was perfectly poised to have my database restored onto it.

The steps were simple:

  1. Install the PostgreSQL database software
  2. Do the basic configuration of PostgreSQL (network connectivity)
  3. Create the typica database instance
  4. Create a local user and grant it access rights to the typica database instance
  5. Restore the database dump file from the cloud

Steps 1-4 are outlined in my initial blog about creating the database. The only thing missing is running the command to restore the data from the dump file.

First, let me explain about the dump file. The dump file is not a copy of the database as it runs. The dump file is actually a text file containing SQL statements that will completely re-create the database. Restoring from these dump files is fairly simple and straightforward. A very large file may take some time, but mine was extremely short.
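In essence, the restore is just feeding that text back into psql — here it is in its minimal form, using my export’s filename as the example (the full session below shows it in context):

psql typica < typica.export.201604100718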

Once you have created the database and granted your user the permissions to the database, you need to download your dump file from your cloud backup and copy it to your new database system. I simply navigated to the latest dump file in my Google Drive and downloaded it to my regular Windows system. I used PSCP (from PuTTY) to secure-copy the dump file to my new database host.

Back on the database host, I exited the psql session where I had created the database and set up the user privileges.

I ran three commands. The first two verified that the postgres user had read access to the dump file — a listing of the directory contents and a “tail” of the dump file. The third is the postgres command that restores from the file. Here is the session nearly verbatim:

postgres@roastlogger ~ $ psql
psql (9.3.12)
Type "help" for help.

postgres=# create database typica;
CREATE DATABASE
postgres=# create user roasty with password 'coffeecoffecoffeecoffee';
CREATE ROLE
postgres=# grant all privileges on database typica to roasty;
GRANT
postgres=# \q

postgres@roastlogger ~ $ ls ~kevin
Desktop Downloads Music Public Templates Videos
Documents gdrive Pictures src typica.export.201604100718

postgres@roastlogger ~ $ tail ~kevin/typica.export.201604100718
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;


--
-- PostgreSQL database dump complete
--

postgres@roastlogger ~ $ psql typica < ~kevin/typica.export.201604100718
SET
SET

And I cut the output short for you. It re-created the database in a very short time.

I then launched my recently compiled Typica program and it immediately asked how to connect to a database, since the previous database was kaput. I input the configuration items and was back in the running.

I think the Mini died from a power supply failure during a recent storm. It was over six years old, so I will be replacing it with a new-model QNAP home NAS. The QNAP will be able to run Samba, PostgreSQL, and Plex with no issues (I also bought RAM to upgrade it to the maximum supported).

Computers need backups. Hardware will fail. Are you prepared for the disaster? Will you know how to recover the important data?

I like backing up my data to Google Drive. I bought a Chromebook years ago, and have been a Google user since the Gmail invite days, so I have around 120GB of free cloud-based storage at my disposal. If I ever fill it up, their rates to expand are pretty low. So, given this, I would really sleep better if my database for Typica, running on a six-year-old Mac Mini, were exported every few days to my Google Drive for safe keeping.

Here’s a funny thing. Google is a firm believer in Linux. They use Linux on their hardware, Googlers’ desktops reportedly mostly run Linux, the Chromebook is Linux, and Android is essentially Linux. And yet there is no official Google Drive client from Google for Linux. Seriously. This is so odd to me.

However, a Googler has contributed to an open-source product that is a useful method of uploading database exports to Google Drive.

Exporting the Database

In the previous blog for setting up the database I had you configure your user account for special privileges for backups. This is so you could run the pg_dump command like so:

 pg_dump typica > typica.export.$(/bin/date +\%Y\%m\%d\%H\%M)

You should try this now and make sure it works without access errors to the database. This is the basic command for exporting the database and it needs to run cleanly before we can proceed. Feel free to use “more” or “less” on the export file to examine the SQL commands that will rebuild your database after a disaster.
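For example, to page through the export you just created (your filename will carry your own timestamp):

less typica.export.201604100718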

In the example I created the destination filename with a date string. The date string is generated automatically at run time because the system executes Yet Another Command to capture the current date in a convenient sortable format: YYYYmmddHHMM. This is specified with the symbols “$(” preceding the date command and a closing parenthesis following the command.
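You can see the format by running the date command on its own; it prints a timestamp like 201604100718, whatever the current date and time happen to be:

/bin/date +%Y%m%d%H%M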

Google Drive client

The software package I am using to send my backups to the cloud is called GDrive. It is coded in Go, the Google language. You can obtain the source and some binaries here. I punted on building it from source for the moment; I downloaded the binary and put it in /usr/local/bin.
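Installing the pre-built binary is just a permission change and a copy — a sketch, assuming the file you downloaded is named gdrive-linux-x64 (adjust to whatever you actually grabbed):

chmod +x gdrive-linux-x64
sudo cp gdrive-linux-x64 /usr/local/bin/gdrive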

Once you have the executable in your path, begin by getting the authorization set up between your local computer account using gdrive and your Google account.

Authorization

You can execute “gdrive help” to see a lengthy list of operations you can perform with the client. It is very full featured, just perhaps a little unwieldy for complex operations. Fortunately for what we want, it will be pretty easy to keep it simple.

To get the authorization set up between Google and your computer, execute the gdrive command with any operation besides “help”. I recommend starting with “list”.

user@local~ $ ./gdrive list
Authentication needed
Go to the following url in your browser:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=36713-farfarfarfapipick.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state

Enter verification code:

The first time you run this it will prompt you with a URL and wait for you to return an authorization code so it can proceed. Follow the instructions on screen.

Setup a Google Drive Subfolder

A “gdrive list” should show a confusing array of folders and documents in your Google Drive. It will look something like this:

0B4UCLDdeCDdisUdN33dtHeRealC0dE  typica.export.201603230645 bin 64.4 KB 2016-03-23 06:45:02

You cannot operate on files and folders with the logical, pretty names you see in the browser or other clients. You have to use the confusing code that precedes the logical information. By default it will only list 30 items, and they seem to be the most recently modified.

Go to Google Drive in your browser and create a folder that you wish to contain your exports — something descriptive like “Typica Database Exports”. Go into that folder and observe the address URL. The non-English code following “folders” is what you need to specify as the location for the files you wish to upload.

https://drive.google.com/drive/folders/3BuSyB33sW3nt2SaMEfL0w3r1Day

You can copy it from the address line or by using “gdrive list”.

Now we will attempt to upload the file you created in your pg_dump demo into your specially prepared folder. We specify the folder using the command line option “-p” followed by the code. The filename to upload comes at the end of the command line.

gdrive upload -p 3BuSyB33sW3nt2SaMEfL0w3r1Day typica.export.201603172133

The file should show up in the folder in your browser in short order.

Export to GDrive through pipeline

I think the ideal way to export the database to the cloud is to not even write it to disk before uploading it — if you write it to disk, you just have to delete it later. Exporting through a pipeline works for now. I don’t know how large the database will grow, or whether this pipelining method will run into memory problems in the future, but that is something you will need to consider and I may have to revisit.

In the first export I had you perform, you redirected the output to a file through the greater-than symbol. If you had not specified this symbol, it would have printed to the screen.

We can use another type of redirection called a pipeline, the special character “|”. This tells the Linux system to send the output to another executable as its input. The gdrive executable has an option on the upload operation to take STDIN as its input from the previous command; this is the dash (“-”) in my example below. We still use the “-p CODE” options to specify the destination folder, and then we follow it with the name you want this file to be called when it is created in the cloud.
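Stripped of the logging we will add next, the pipeline looks like this (using the example folder code and an example filename from above):

pg_dump typica | gdrive upload - -p 3BuSyB33sW3nt2SaMEfL0w3r1Day typica.export.201603172133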

One last item before you schedule this in the cron scheduling facility: we will want to redirect both STDOUT and STDERR (the “2>&1”) to a file. If we do not, the local mail facility on the Linux system will end up with many extraneous messages from each execution of the command.

Test this final command from your terminal. When it executes cleanly, with nothing written to the display, schedule it in cron:

/usr/bin/pg_dump -d typica |/usr/local/bin/gdrive upload - -p 3BuSyB33sW3nt2SaMEfL0w3r1Day  typica.export.$(/bin/date +\%Y\%m\%d\%H\%M) >/home/kevin/db_backup.log 2>&1
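For cron, that same command goes on a single crontab line behind a schedule. A sketch of an entry that runs it every morning at 2:00 (edit with “crontab -e” as your own user; the backslash-escaped % signs are required inside a crontab):

0 2 * * * /usr/bin/pg_dump -d typica |/usr/local/bin/gdrive upload - -p 3BuSyB33sW3nt2SaMEfL0w3r1Day typica.export.$(/bin/date +\%Y\%m\%d\%H\%M) >/home/kevin/db_backup.log 2>&1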


The Typica software was thoughtfully built around the commercial-grade relational database (RDB) system PostgreSQL. Think of Oracle RDB without the thousands of dollars in licensing, because it is open source like Typica and thousands of other software packages. Open source means the developer(s) make their code completely available so anyone can download it and compile it to run. In doing so, the original authors hope these new people will contribute back by finding and fixing bugs, suggesting or writing new features, improving documentation, and perhaps even making ready-to-install packages available to other potential users.

The PostgreSQL software can run on just about any operating system: Windows, Macintosh OS X, Linux (including many home Network-Attached Storage products), Solaris, BSD, and the list could go on. This is due to both the open-source nature of the software and some investment from corporate sources that like having an open-source alternative RDB.

I picked Linux. I will have been using Linux for 20 years come late 2016. It is completely open-source like Typica and PostgreSQL.

I picked my old Mac Mini (2006) to run the software. Apple hardware is generally top-notch. The OS X operating system used to be one of the best, but lately I have been a little disenchanted with the direction Apple has taken it. I am referring not just to the color scheme, but to some major changes that have been steps backwards in terms of interoperability. If the OS is going to be hampered with bugs, the GUI ugly, and my old system slowed to molasses-in-January speeds, “well,” I said, “I might as well run Linux and at least get the speed back.” I loaded the Mint distribution last year and then loaded Plex Media Server to share the music library to the Onkyo amp. That gets little use from me, so this platform will make a great hosted database for the home network.

Why Hosted Database?

Why should anyone care about a “hosted database”? A hosted database means the database is running on a different system than the software that will use the database. The two systems communicate over a network connection. This type of setup is not strictly necessary, since the database could run on the same system as the software. However, there are some distinct advantages to having a hosted database.

I choose to have a hosted database so that I can use the software from a couple of different systems and still access the same data. For instance, my fancy new Surface Pro should probably stay in the house away from the fire in the roaster and possible tumbles to the concrete floor. So it should not be doing the roast-logging, but it would be very useful for capturing my tasting notes.

So I will use an old laptop to start roast logging. It has been around the block a few times, so it could die during this process, and I will very likely need to replace it with something like a netbook. Or a retired thin client. Having a hosted database means I do not have to worry about getting the data off a broken system; I can instead install on a new system, connect to the database, and keep on truckin’.

Install and configure PostgreSQL

I will run you through the steps of installing PostgreSQL on a Linux system. If you choose Windows to host the database you will have a very different experience, but you may still find this educational. Likewise, a Mac host experience will be different, though there will be some similar concepts.

If you are using Mint (a Linux distribution) or another similar Debian-package-based distro, you should find these example instructions easy to follow. If you are using a Red Hat RPM-based system you will find things very similar, but you will have to do a little digging to translate my package names and apt-get commands into their “yum” equivalents.

Install Packages

The system will need to have the PostgreSQL packages installed. The list of packages will likely look like this:

  • libpq5:amd64
  • postgresql-client-common:all
  • postgresql-client-9.3:amd64
  • postgresql-common:all
  • postgresql-9.3:amd64
  • postgresql:all

But it is much simpler than that. You can install the main package and all the dependencies with a single command. Make sure the system has internet access and, from a terminal session, enter the command:

sudo apt-get install postgresql

Keep your terminal open; you will need it for the following sections as well.

Allow Network access

The software is secure by default and does not allow incoming network connections, because that is a risk for a system not properly secured. We will have to accept that risk and configure the database to listen for and allow remote connections.

In your terminal, issue the command to change your working directory to “/etc/postgresql/9.3/main/”. Use the tab “auto-complete” ability, as your version of PostgreSQL could be newer than my example.
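For example (adjust the version number to match whatever apt installed for you):

cd /etc/postgresql/9.3/main/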

Save original CONF files!

I always recommend saving config files before making changes. I don’t always remember, and when I don’t, I wish I had… So here is your reminder — make an “original” copy before making changes. In this particular directory that can be done with a single loop-style command:

for i in *.conf; do sudo cp $i _orig.$i ; done

This will loop over each file ending with “.conf” and make a copy with “_orig.” as a filename prefix.
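A quick listing confirms the backup copies are in place:

ls _orig.*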

Modifying pg_hba.conf

Next we will edit the “hba” file. This file controls which hosts are allowed to connect to PostgreSQL over the network. Before we can edit the file we must know what the subnet is for your local network. This could get dicey for you if your local subnet is completely IPv6; hopefully, if it is, you already know that and need this guide only for the general information rather than the details.

Find your IP Address and subnet information

So, assuming you have no idea what I’m talking about, run this command on your host system to see what your IP address is: “ip -4 addr show”. The output will look similar to the below:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.1.109/24 brd 192.168.1.255 scope global eth0

The information we are looking for is associated with device #2, not the localhost address. The second line contains the IP address (192.168.1.109) and the broadcast address (192.168.1.255). This tells us that we will use 192.168.1.0 when we edit the file, next. By using the zero we specify the entire expected group of addresses, from 1 to 254.

Edit the file

If you have edited files on Unix before, use your favorite editor and add the following line to the very bottom of the file.

host all all 192.168.1.0/24 md5

If you have never used a file editor in Unix I would love to introduce you to VIM (VIM is not incorrect Roman numerals for 994), but that would stretch this already lengthy blog post into something else entirely. Let’s use the shortcut of using echo to append to the file. For this to work you MUST absolutely use two greater-thans, the whole thing has to be wrapped in “sudo sh -c” so that the redirection also runs with root privileges, and you should modify the subnet to match yours if it is different. The magic editor-less command is:

sudo sh -c 'echo "host all all 192.168.1.0/24 md5" >>pg_hba.conf'

One last thing to note. If you have an overly complex network, you might need to repeat the network discovery and add an additional line if the clients connecting to your database live on an entirely different network than the host. And pray that the two networks route to each other.
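As a sketch of that situation, if your roast-logging laptop lived on a hypothetical second subnet such as 192.168.2.0, you would append a second line just like the first:

sudo sh -c 'echo "host all all 192.168.2.0/24 md5" >>pg_hba.conf'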

Modify the postgresql.conf

This file must also have one line added to it. The line is simple and instructs the database to listen on any network interface available. The line can go at the bottom of the file, so the magic echo command will save you from having to learn a Unix file editor if you have not already made a choice for VIM (the right choice) or a wrong choice.

sudo sh -c "echo \" listen_addresses='*' \" >>postgresql.conf"
Restart the database software
sudo service postgresql restart
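You can quickly confirm the database is now listening on all interfaces — you should see a line for 0.0.0.0:5432 rather than only 127.0.0.1 (a quick check using ss; netstat works too):

ss -tln | grep 5432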

Setup User access

The database will run as user “postgres” (more than likely). I am going to recommend that you connect to the database as a user called “roasty” or at least “typica”. You could simplify things and make this connection as yourself with your own credentials; it would save one small step with the backups later on, but it is a small step, and for good security practice I recommend you do not. Just make sure you document the password somewhere.

Create local user (if not yourself)

Choose your username now and run the following command (substituting your username for “roasty” every time you see it in this guide).

sudo useradd roasty

Database operations

We will use the sudo and su commands to become the database operator which is probably postgres. Then we will issue commands to create the database “typica” and grant access to the “roasty” account and yourself, assuming you will be doing the backup operations under your account.

Become the database operator and then execute the “psql” program, which allows you to perform the necessary operations within the database software.

sudo su - postgres
psql

You are now “in” the PostgreSQL database software. You should feel like you’re zipping around on a lightcycle right now. No? Huh… Right, then, the first step is to create a database called “typica”. Be sure to end each line with a semi-colon before pressing enter. If you forget, just type the semi-colon on the new line and press enter again; it still works.

create database typica;

You worked up a sweat creating a database, I’m sure, so take a break and have some coffee. No, seriously, the hardest part is definitely behind us, so you deserve a break. When you come back we will make sure the user(s) for the software and your backups have a password, and grant the necessary privileges on the database you created.

create user roasty with password 'passwordwhichImsureisaboutcoffee';
grant all privileges on database typica to roasty;
CREATE USER yourusername WITH PASSWORD 'evenbetterpassword';
GRANT ALL PRIVILEGES ON DATABASE typica TO yourusername;
grant USAGE on SCHEMA public to yourusername;
grant USAGE on SCHEMA typica to yourusername;
grant USAGE on SCHEMA typicafeatures to yourusername;
ALTER ROLE yourusername WITH Superuser;

After the last line you can exit by pressing Ctrl-D.

As you can see, “yourusername” had to get a lot of extra privileges so that it can perform the database export for backups. This could be done as the database operator (postgres) to save this step and limit the access granted to yourusername, but what I have in mind for backups is more easily managed under your own user account.

Database is Ready!

The database is ready for the software to connect! Remember the hostname or IP address, the name of the database, and the username and password you have specified, and go start the software. It will ask you for this information.
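If you want to sanity-check the remote access before firing up Typica, you can do it from any other machine on the subnet that has the PostgreSQL client installed — a quick test, using the database host’s address from the earlier example:

psql -h 192.168.1.109 -U roasty -d typica

It should prompt for roasty’s password and drop you at a typica=> prompt; \q exits.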


It’s been a while

Things have been super busy for the last year, and there has not been much noteworthy or mentionable to blog about in either my coffee or tech world. It’s been too boring to write about.

OK that is totally not true. There has been the excitement of fresh lots of coffee from Sweet Maria’s, and then there was the night when I had several people over for a roasting demo (will definitely do that again soon).  AND then there was our visit to the Diria Coffee Processedora in Hojancha, Guanacaste, Costa Rica. I definitely should have blogged about that. That was awesome.

But that was months ago, and I don’t think I could capture all the excitement and joy of that visit now. I’m sorry…

So here we are, and in the words of Buckaroo Banzai, “no matter where you go, there you are.”

So let’s start over. I’m Kevin, I’m a little nerdy and techy and love coffee. So the next several blogs (I’m envisioning several) will combine both Coffee AND Tech as I explore the Typica program for roast logging. This will be a win-win instead of just writing about Coffee OR Tech.

Roast Logging and Profiling

I’ve always wanted to have more data from my roasts. I believe that I need to track a roast through temperature and activities in order to understand it. This is the beginning of making a roast profile. A roast profile is simply a guide to turning the beans brown in the best way possible. A profile could be simple, like a flat, even climb in temperature until the end of the roast; or a profile can be complex, with a couple of cycles of raising and reducing heat. The different approaches to cooking the beans can change the beans’ flavor and improve enjoyment.

The reason is simple — the process of cooking any food creates chemical reactions. The chemical reactions break down or create chemicals in the food, coffee included. Some chemicals have a taste that may be desired, and others we perhaps want to avoid. I’m far from an expert on this process, but I believe that data logging can assist me in my hobby anyway.

My goal is to capture the data from roasting so that I can repeat a great roast or avoid making a mistake again. This type of logging is extremely important to commercial operations, who have a lot at stake every time they put some of their investment into the fire. They want to make sure they don’t ruin a batch and, whenever possible, improve on a previous batch.

Home roasters can benefit from this as well. We may not have the same quantity of beans invested and as much at risk, but we can benefit from logging.

I have filled half a dozen notebooks with handwritten notes over the last nine years. I’ve also tried several methods of computer or tablet logging. They are OK, but they are difficult to refer to with any precision when it comes to repeatability. And the real challenge with any method of roast-logging or note-taking is also tracking cupping and tasting notes.

I first found Typica a number of years ago while googling. I was interested in the software; it was open source and used a real relational database — Postgres! These are exciting things to a nerdy, tech-minded homeroaster! At the time I investigated it a little and even conversed with Neal, the author, but decided not to invest in the expensive device it required. Neal required any hardware to be open-source friendly, and at that time that pretty much meant only the expensive National Instruments data logger device. I couldn’t justify the expensive hardware for hobby operations.

I tried a DMM (digital multimeter) with temp input and optical RS-232 for connecting to the computer. Logging this data was something, but cleaning it up to make a graph in Excel or Google Spreadsheets proved to be very time consuming and difficult. The paper logs continued until I began using an Android tablet and an app called Coffee Roaster.

The Android app was almost good enough for recording data manually and making graphs. I configured it for two temperature trackers and used one to track my burner levels and the second for the actual temperatures seen at the DMM. I lost data several times due to upgrades and a corrupted database. Through it all I kept at least basic written logs, if only to keep track of my numbering (I did roast #396 in the RK+Grill last week).

Typica Again

Fast forward to late 2015, and I looked at Typica again because of my recent ennui with paper and the Android app. Imagine my surprise and joy to find that it can now use a Phidgets device, which costs about a fifth of what the NI devices do. This is hobbyist territory.

I will begin using this, but it will take a bit to set it up the way I want. This is not quite as simple as just installing it and making a few configuration changes like the Android app; it is a little more complex. But it doesn’t have to be scary. I hope my next few blogs can make it less complex for someone else.

I am using an old Mac Mini computer as a NAS and database host. I was going to use my new Windows computer to run the application, but as I began experimenting and thinking about the risk of fire and heat around it, I think I might use an older laptop and/or netbook to do the logging. The logging computer may run Linux — stand by and read on as I discover the path forward.

I’ve already found a limitation that many home roasters will face, and I will work out a change in code to fix it. Neal created the software for commercial roasters. A commercial roaster can log the bean temperature, which is ideal, since the drum has a non-moving piece: they can put the temperature probe right into the beans. The RK drum and grill setup has one full drum that rotates completely, with no fixed piece to mount a probe in the beans. The best I can do is log the temperature in the chamber outside the drum. This at least gives me data about how hot it is with the burners going. The limitation I hope to patch sets the top of the graph scale at 500F — plenty of head room for a bean probe, but much too low for temps outside the drum; it should be double that.

Looking forward

As you can tell, I’ve already done a little bit of setup on this, so I have an idea of what I can write about. Neal documents many typical scenarios on his site. However, I see where I can provide some additional information in the next few blog entries. I will write with a little more detail about setting up a hosted/shared database, including how to do backups of the database. I have my backups copied directly to my Google Drive account, so it goes from database to cloud copy for offline and offsite storage. I will also attempt to document and write about setting up the compiling environment and my attempts to provide a more home-roaster-friendly scaled graph.

Beyond that I imagine I will start trying to document the tastes of the roasts and see if I can improve my roasts through better profiles. We’ll see where this goes.