Ubuntu supports upgrading from one LTS release to the next LTS in sequential order. For instance, a user on Ubuntu 16.04 LTS can upgrade to Ubuntu 18.04 LTS, but cannot jump directly to Ubuntu 20.04 LTS. To move from Ubuntu 16.04 LTS to Ubuntu 20.04 LTS, the user would need to upgrade twice: first to 18.04 LTS, then to 20.04 LTS.
For a complete list of releases and their current support status, see the Ubuntu Wiki Releases page.
Upgrade checklist
Check the release notes for the new release for any known issues or important changes.
Fully update the current system. The upgrade process works best when the current system has all the latest updates installed. It is also suggested to reboot the system after all the updates are applied, to verify that the system is running the latest kernel.
sudo apt update
sudo apt upgrade
sudo reboot
Users should check that there is sufficient free disk space for the upgrade. Systems with additional software installed may require a few gigabytes of disk space.
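For example, free disk space on the root filesystem can be checked with df (part of coreutils):

```shell
# show human-readable free space on the root filesystem,
# which is where the upgrader downloads and installs packages
df -h /
```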
The upgrade process takes time to complete; be patient.
Third-party software repositories and PPAs are disabled during the upgrade. However, any software installed from these repositories is not removed or downgraded. Software installed from these repositories is the single most common cause of upgrade issues.
Back up all data. Upgrades are normally safe, however, there is always the chance that something may go wrong.
Upgrade
It is recommended to upgrade the system using the do-release-upgrade command on server edition and cloud images. This command can handle system configuration changes that are sometimes needed between releases.
do-release-upgrade
To check for any available new versions to which you can upgrade, run the following command:
do-release-upgrade -c
This will check Ubuntu’s servers for available upgrades and inform you which version of Ubuntu you’ll be upgrading to.
To begin this process, run the following command:
sudo do-release-upgrade
Upgrading to a development release of Ubuntu is available using the -d flag:
sudo do-release-upgrade -d
I’m migrating all my APIs from Heroku to serverless cloud platforms. Here’s a rough idea that implements a simple pageview counter with Deta.sh Micros (Node.js as the runtime) and the Base database (of course, without any charges :partying_face:).
Before we start, some limitations of the Deta platform need to be noted.
For such a simple pageview counter, all these limitations won’t cause any problems.
Okay, let’s start our journey.
Install and configure the Deta CLI
First, install the Deta CLI.
For macOS and Linux:
curl -fsSL https://get.deta.dev/cli.sh | sh
For Windows PowerShell:
iwr https://get.deta.dev/cli.ps1 -useb | iex
This will download the binary containing the CLI code and try to add the deta command to your PATH.
Once you have successfully installed the Deta CLI, you’ll need to log in to Deta with your credentials.
From your Terminal:
deta login
Create a micro project
Deta Micros (micro servers) are a lightweight but scalable cloud runtime tied to an HTTP endpoint. They are meant to get your apps up and running blazingly fast. Focus on writing your code and Deta will take care of everything else.
Initialise the project
To create a Micro, navigate in the Terminal to a parent directory and type:
deta new --node pageview
This will create a new Node.js Micro called pageview which will contain an index.js file.
Enter the pageview directory and set up the dependencies:
cd pageview
yarn init -y
yarn add express
This initialises the project with the Express.js framework, with the following contents in the index.js file:
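A minimal sketch of that starter file, assuming the usual Express hello-world (the actual generated contents may differ slightly):

```javascript
const express = require('express');

const app = express();

// the only route in the starter project: respond with a greeting
app.get('/', (_req, res) => {
  res.send('Hello World');
});

// Deta Micros import the app object instead of calling app.listen()
module.exports = app;
```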
Don’t rename the index.js file or the app application name, as both are required by Deta Micros.
Then, just code as in a normal Express app…
Deploy the project
After updating the dependencies, use deta deploy to update the Micro in the cloud:
deta deploy
Well, the starter project is up and running :star2:. We can now visit the endpoint (use deta details to find the URL).
Currently, only the GET / route is implemented, which just returns the string Hello World.
The workflow is just like this: code in the index.js file, then deploy to the cloud with deta deploy. Of course, you can spread the middleware, routers, controllers, etc. across different folders and files as usual; here we just concentrate on the single index.js file for simplicity.
Connecting the Base database
Deta Base is a fully-managed, fast, scalable and secure NoSQL database with a focus on end-user simplicity. It offers a UI through which you can easily see, query, update and delete records in the database.
Okay, still in the pageview folder, add the deta package into our project:
yarn add deta
Since we’re connecting to the Base database from the Micro, we don’t need to care about credentials for the connection: valid keys are pre-set in the Micro’s environment.
To create a database named pageview, just update the index.js as:
const express = require('express');
const { Base } = require('deta');

const app = express();

// connect or create a database
const pv = Base('pageview');

// ...
Fetch and update a pageview counter
Here, we’re going to set a POST router to receive the JSON data, update the database and then return the updated data to the client (the website).
Inside the Base database, we can use the url string as the unique key to store and query the data. But you can also format the url as an MD5 string or anything compatible as the identifying key; that’s really up to you.
const express = require('express');
const { Base } = require('deta');

const app = express();

// parse json data from request.body
app.use(express.json());

// CORS...
app.use((_req, res, next) => {
  // you should know what you're doing here...
  // more information at
  // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE');
  next();
});

// connect to the 'pageview' database in Base
const pv = Base('pageview');

app.post('/', async (req, res) => {
  try {
    const { url, title } = req.body;
    const key = url;

    // check whether the record exists in the database
    const record = await pv.get(key);

    // the default counter/hits number
    let hits = 1;

    // if the record exists, update the counter on the 'hits' property
    if (record) {
      hits = record.hits + 1;
    }

    const data = { title, url, hits };

    // put the updated data into the database
    const updatedRecord = await pv.put(data, key);

    // return the updated data as a JSON object
    // structured as { title, url, hits }
    return res.status(200).json(updatedRecord);
  } catch (err) {
    return res.status(500).send(err.message);
  }
});

// export 'app' as required by Deta Micros
module.exports = app;
On the client-side, we can use the Fetch API to send the data (the url and title) to the endpoint (found via deta details) and get the updated pageview hits whenever a visitor hits a page :tada:.
For example:
const data = {
  url: document.URL,
  title: document.title,
};

// your deta micro url
const detaURL = 'https://xxxx.deta.dev';

fetch(detaURL, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(data),
})
  .then((response) => response.json())
  .then((data) => {
    console.log('Success:', data);
    const { hits } = data;
    // the pageview number here...
  })
  .catch((error) => {
    console.error('Error:', error);
  });
That’s all for this simple pageview counter built with Deta.sh services. You can add more layers of security and functionality with all the free resources.
In the real implementation on this Jekyll site, I’m using RudderStack to capture the pageview events (using the Beacon API) and push them to the Deta Base database in the cloud. Thus, the frontend only needs to GET the data with a unique id to retrieve the pageview counts. The id is generated from the url and hashed with MD5:
{{ page.url | prepend: site.url | replace: 'index.html', '' | md5 | slice: 0, 14 }}
# output like
# 89a323f52193af
In the Micro, just add a new GET route like app.get('/:id', ...) for fetching data. The recent visits are also returned from the same GET request; see it live on the sidebar… :point_right:
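A sketch of such a GET route, reusing the pv Base instance from the earlier snippet (the 404 handling and key format are assumptions):

```javascript
const express = require('express');
const { Base } = require('deta');

const app = express();
const pv = Base('pageview');

// fetch a stored pageview record by its id
// (assumes records were stored under the MD5-derived key described above)
app.get('/:id', async (req, res) => {
  try {
    const record = await pv.get(req.params.id);
    if (!record) {
      return res.status(404).json({ error: 'not found' });
    }
    return res.status(200).json(record);
  } catch (err) {
    return res.status(500).send(err.message);
  }
});

module.exports = app;
```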
Conclusion
The Deta.sh platform is great for serving microservices without any charges, like the pageview counter presented in this post. Don’t hesitate to use these free resources to implement your ideas.
ACME stands for Automatic Certificate Management Environment and provides an easy-to-use method of automating interactions between a certificate authority (like Let’s Encrypt, or ZeroSSL) and a web server. With ZeroSSL’s ACME feature, you can generate an unlimited amount of 90-day SSL certificates (even multi-domain and wildcard certificates) without any charges.
Create ZeroSSL account
Visit the ZeroSSL official site to register an account. All certificates issued with ACME will be stored in your ZeroSSL account dashboard for easy management (after registering via acme.sh).
Install acme.sh
acme.sh is an ACME protocol client written purely in Shell. It works on any Linux server without special requirements.
Update your Linux system with the latest CA bundle and patches before you begin, otherwise issues may occur when generating your free SSL certificate. Once that is complete, follow the procedure below.
First, you need to log in to your Cloudflare account to get your API key.
You can narrow the Cloudflare API token to write access for Zone.DNS on a single domain only, then set the variables in your environment by running the following commands in the shell (these variables will be saved by acme.sh):
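Assuming acme.sh’s Cloudflare DNS API integration (dns_cf), the setup looks roughly like this; the token and id values are placeholders, and the variable names follow acme.sh’s dnsapi documentation:

```shell
# scoped API token with Zone.DNS write access (placeholder values)
export CF_Token="your-cloudflare-api-token"
export CF_Zone_ID="your-zone-id"
export CF_Account_ID="your-account-id"

# issue a cert for the domain using the Cloudflare DNS-01 challenge
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'
```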
After the cert is generated, the files are stored in ~/.acme.sh/<example.com>/. However, it is NOT recommended to use the cert files in the ~/.acme.sh/ folder directly, as the folder structure may change in the future.
You should copy the certs to the target location instead; you can use the following command to install them:
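acme.sh provides --install-cert for exactly this; a sketch for Nginx, where the target paths and reload command are examples to adapt:

```shell
acme.sh --install-cert -d example.com \
  --key-file       /etc/nginx/ssl/example.com.key \
  --fullchain-file /etc/nginx/ssl/fullchain.pem \
  --reloadcmd      "service nginx force-reload"
```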
The ownership and permission info of existing files are preserved. You can pre-create the files to define the ownership and permissions.
The cert will be renewed every 60 days by default. Once the cert is renewed, the Apache/Nginx service will be reloaded automatically by the --reloadcmd command.
Please take care: the reloadcmd is very important. The cert can be renewed automatically, but without a correct reloadcmd the renewed cert may not be flushed to your server (like Nginx or Apache), and your website will keep serving the old cert after 60 days.
Renew the certs
Indeed, you don’t need to renew the certs manually; all the certs will be renewed automatically every 60 days.
However, you can also force a cert renewal with:
acme.sh --renew -d example.com --force
Stop cert renewal
To stop the automatic renewal of a cert, you can execute the following to remove the cert from the renewal list:
acme.sh --remove -d example.com
The cert and key files are not removed from the local file system; you can remove these files or the respective directories (e.g. ~/.acme.sh/example.com) yourself.