In my previous article, I revealed how easy it is to use an open source database called SQLite on our IBM i server.
If you need a refresher, that article can be found here.
In this article, I’ll show you how easy it is to use this lightweight database within a Node.js program.
To recap, SQLite is a serverless, self-contained database; it is ideal for applications that need to store information without the overhead of a full relational database server.
In this article, we will use Node.js to call a Twitter API to get tweets with the hashtag #IBMChampion.
Once we have retrieved these tweets, we will store them in an SQLite database.
To wrap things up, we will schedule this program to run every 24 hours to give us a good representation of data.
To use the Twitter developer API interface to retrieve tweets, we must register for their service.
This is achieved at https://developer.twitter.com/en/portal/dashboard – there is no charge for this service at the basic level.
Once we have registered, Twitter gives us the three API keys we need to access it from our Node application:
- An access key.
- An access secret.
- A Bearer token.
Keep them safe and do not share them.
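One common way to keep the keys safe is to read them from environment variables rather than hard-coding them in the script. This is a hypothetical sketch, not from the original program, and the variable names here are my own choice:

```javascript
// Hypothetical: the environment variable names below are assumptions,
// not taken from the original article.
const required = ['TWITTER_API_KEY', 'TWITTER_API_SECRET', 'TWITTER_BEARER_TOKEN'];

// Collect any credentials that have not been set in the environment.
const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.warn(`Missing Twitter credentials: ${missing.join(', ')}`);
}
```

Set the variables in your shell profile (or the job description on IBM i) and the keys never appear in source control.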
Let me break my program down into three sections, which will allow me to explain the steps within.
- Define the node modules we are going to use
- Create a Twitter object, called client, using my Twitter API keys
- Define the SQLite table that will store our Tweets
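The first two steps might look like the sketch below. The `twitter` and `sqlite3` npm package names are assumptions based on the steps described; substitute whichever modules your program actually uses. In a real script these would normally be top-level `require` calls; they are wrapped in functions here so the sketch stands alone:

```javascript
// Assumed packages: npm install twitter sqlite3
function loadModules() {
  return {
    Twitter: require('twitter'),           // Twitter REST API wrapper
    sqlite3: require('sqlite3').verbose(), // SQLite driver
  };
}

// Build the Twitter client from the keys issued on the developer portal.
// The environment variable names are my own choice.
function makeClient(Twitter) {
  return new Twitter({
    consumer_key: process.env.TWITTER_API_KEY,
    consumer_secret: process.env.TWITTER_API_SECRET,
    bearer_token: process.env.TWITTER_BEARER_TOKEN,
  });
}
```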
Next, we move on to the SQLite database definition.
- Make a database object, called db, storing the name of the database in the /db subdirectory and calling it f_tweet.db
- Check for any errors in creating the database
- A quick debug message to show ‘all is well’
- Create the table called tweets. The table definition is stored in the variable sql.
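A minimal sketch of these database steps, assuming the `sqlite3` npm package; the column names in the table definition are my own guess at what the program stores:

```javascript
// Table definition, stored in the variable sql as in the article.
const sql = `CREATE TABLE IF NOT EXISTS tweets (
  id         TEXT PRIMARY KEY,
  tweet      TEXT NOT NULL,
  created_at TEXT NOT NULL
)`;

// Assumed package: npm install sqlite3
function openDatabase(dbPath = './db/f_tweet.db') {
  const sqlite3 = require('sqlite3').verbose();
  // Make the database object, checking for any errors in creating it.
  const db = new sqlite3.Database(dbPath, (err) => {
    if (err) {
      console.error('Could not open database:', err.message);
    } else {
      console.log('Connected to the tweets database'); // 'all is well'
    }
  });
  db.run(sql); // create the tweets table if it does not already exist
  return db;
}
```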
- Set up the parameters for the Twitter API call. It will search for the #IBMChampion hashtag and, for this example, retrieve only 50 tweets.
- Call the Twitter API, using the search/tweets endpoint, with the parameters set in the params variable
- Check if any errors are returned from the API call
- Pick the required data from the API response. This is where we store the tweet ID, the tweet text and the timestamp it was created.
- Bind the tweet values to parameter markers to prevent any SQL injection problems
- Create the insert SQL statement
- Run the insert SQL statement, passing the parameter marker variables
- Check if we have already stored the tweet
- Another quick debug message showing the record has been stored in the database
- If there were any errors from the API call, log them to the console
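The retrieve-and-store steps above can be sketched as follows, assuming the `twitter` and `sqlite3` packages and the standard v1.1 `search/tweets` response shape (`statuses`, `id_str`, `text`, `created_at`):

```javascript
// Search parameters: the #IBMChampion hashtag, 50 tweets only.
const params = { q: '#IBMChampion', count: 50 };

// Insert statement using parameter markers to prevent SQL injection.
const insertSql = 'INSERT INTO tweets (id, tweet, created_at) VALUES (?, ?, ?)';

function getTweets(client, db) {
  client.get('search/tweets', params, (error, tweets) => {
    if (error) {
      return console.error('Twitter API error:', error); // log any API errors
    }
    for (const t of tweets.statuses) {
      // Pick the tweet ID, text and creation timestamp from the response.
      db.run(insertSql, [t.id_str, t.text, t.created_at], (err) => {
        if (err) {
          // A PRIMARY KEY violation means we have already stored this tweet.
          console.log(`Already stored tweet ${t.id_str}`);
        } else {
          console.log(`Stored tweet ${t.id_str}`); // quick debug message
        }
      });
    }
  });
}
```

Because the tweet ID is the table's primary key, re-running the program simply skips tweets it has already saved.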
And that is it. How easy was that to store information in a database?
Run the Program
In a shell, we will execute our Node script.
You can see part of the console log in the image below.
If I run a quick SQL query on the database using the SQLite CLI, we can see all the records it has stored.
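A quick check from the SQLite CLI might look like this. The database path matches the one used earlier, and the column names are my assumption; the CREATE TABLE line is only there to make the session self-contained and is a no-op once the Node program has created the table:

```shell
# Hypothetical SQLite CLI session run from the application directory.
mkdir -p db
sqlite3 db/f_tweet.db 'CREATE TABLE IF NOT EXISTS tweets (id TEXT PRIMARY KEY, tweet TEXT, created_at TEXT);'
sqlite3 db/f_tweet.db 'SELECT COUNT(*) FROM tweets;'
sqlite3 db/f_tweet.db 'SELECT id, created_at FROM tweets LIMIT 5;'
```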
To schedule the program to run daily, we add the following job to our job scheduler.
CMD(QSH CMD('cd /powerwire/tweets && node gettweets'))
TEXT('Get Champion Tweets')
As we can see from this example, SQLite is a great addition, allowing us to store information quickly and easily. No messing with server jobs; all the database configuration is contained within our application.
My next article will show how easily SQLite can be used to analyse the information we have stored from Twitter.
If you have any questions, either on this article or anything else open source, use the comments below, or send me a message on Twitter @AndyYouens.
Andy Youens is an IBM i consultant/instructor at Milton Keynes, UK-based FormaServe Systems with over 40 years of IBM midrange experience.
IBM Champion 2021