Today we provide a technical update and demonstration of SMS and—additionally—end-to-end encrypted XMPP messages on the chat application we’re developing, “Chatty”.
Arguably the most critical functionality in a phone is the ability to make and receive calls through the Public Switched Telephone Network (PSTN), that is normal cellular calls using phone numbers. While at Purism we are eager to implement communication systems that enable much greater privacy and security than one can expect from PSTN calls, the PSTN is still the most ubiquitous network and for the time being we can’t very well go around selling a phone that isn’t able to make PSTN calls.
Purism, the Social Purpose Corporation focused on software freedom, privacy and security, proves it is dedicated to making its products secure straight off of the factory floor. Now, new PureOS installations (including those provided with Librem devices) have AppArmor activated by default. Let us first look at what AppArmor is, and then why we chose it specifically to strengthen PureOS.
On the road to a working mobile phone, doing some initial evaluation and testing of the current state of existing user interfaces and frameworks is key to determining what can readily serve as building blocks and what needs work. Last weekend I did an initial experiment in getting Plasma Mobile working on our i.MX 6 based test development board, using a 4.13.5 Linux kernel and stock Debian Testing. Initially, I encountered a few problems with KWin not wanting to start a Wayland compositor due to not recognizing the device as OpenGL ES 2.0 capable and also not finding a few needed OpenGL extensions. After some digging, and with help from Plasma Mobile developer Bhushan Shah, we tracked this issue down to a bug in libepoxy that had been solved a long time ago. Unfortunately, Debian’s packaged version of this library was very old, so I upgraded it to a newer version manually (and we will get it updated in Debian soon). This resulted in a working Plasma Shell on the device.
The next step was compiling and installing the Plasma Mobile components from the current Git master and running the mobile shell. This initially led to graphical glitches in the display, which were caused by KWin running as root (which you should never do as a user, but I did not think it would also cause major issues just for testing whether the shell works or not). After switching to a regular user for running the KWin Wayland compositor and removing a dead call to upstart from the plasma-phone launch script, I could start the Plasma Mobile/Phone shell with the following command:
kwin_wayland --drm --xwayland plasma-phone
Here are the screenshots you have probably been waiting for the whole time:
Of course this is not a final product, by any stretch of the imagination. It’s simply a test to see that it runs. There is a lot to do in terms of performance improvements, as Plasma Mobile still runs pretty slowly on this kind of hardware (which could be less of a problem if we use the i.MX 8 platform). Also, these initial tests were done using recent—but not the most up to date—versions of Plasma, KDE Frameworks and Qt (KWin/Plasma 5.10.5, KF 5.37.0, Qt 5.9.1), while a lot of performance improvements and bug fixes went into the latest versions. So it is definitely worth switching as soon as possible to tracking KDE’s latest development releases to benefit early from improvements made across the whole stack.
In general, Plasma Mobile already provides a usable (albeit alpha-quality) mobile interface today. The Qt Quick/QML-based Kirigami component library and interface guidelines also provide a nice framework for mobile application developers; it has been tested on Android as well and works nicely on and with the Plasma Mobile shell.
We are looking forward to seeing what we can do with the Plasma Mobile shell in the future. Many thanks to Bhushan and the KDE community for helping with the issues encountered when making Plasma work on the i.MX 6, and for their plans to make a real Plasma Mobile alpha release soon. If you are interested in the Plasma Mobile roadmap, this recent post from Sebastian Kügler might be interesting for you.
In this short tutorial, I will show you how to watch your favorite YouTube videos without being annoyed by ads or by random visuals popping up (like “annotations”). It will also improve your privacy by avoiding storing history and cookies from those videos in your browser.
As a filmmaker, I think that displaying any kind of visual artifact (ads, comments, annotations…) on top of a video degrades the artwork. It is like going to a museum and seeing Post-its and stickers pasted all over the sculptures and paintings. How could a museum justify such a business model? Of course, YouTube is not a museum, and I don’t want to discuss ethics or business models here (maybe in another post?). YouTube is also a great source of inspiration and learning for me; I simply want a better viewing experience.
The solution to improve your watching experience is called GNOME MPV. It is a video player that lets you watch any video from your computer as well as remote videos like the ones from YouTube.
GNOME MPV is based on FFmpeg and is able to read almost any video format. It has a very simple interface and is very fast. It has become my main video player.
I don’t think that GNOME MPV is currently the default video player in PureOS, so you may need to install it. It is very easy: open the GNOME software center (“Software”) and search for “GNOME MPV”. From there, click the “Install” button. When done, just launch it.
Watching a YouTube video
In GNOME MPV, click the “+” button at the top left of the window and select “Open Location”. A small dialog will appear.
In the text field, paste your YouTube video link and click “Open”. You can try it with this example (a song from the Free Music Archive): youtube.com/watch?v=4M9Puanhdac
Of course, I cannot guarantee that it will always work. Be aware that YouTube remains in control of its videos and can decide what level of restriction to apply to them. If problems occur, also make sure that your system is up to date; new versions with fixes may be available.
Play an entire YouTube playlist
You can also play an entire playlist. This time, just paste a YouTube playlist URL.
Note that for it to work, I had to remove the video id from the URL and only leave the “list” attribute.
You can test with this example: youtube.com/watch?list=PLzCxunOM5WFJ3B0F5AnUCwMBTlyq64vKP
From there, you may go to the menu button at the top right of the window (the three horizontal lines) and select “Toggle Playlist”.
I use YouTube as an example in this tutorial because it is the streaming service that I use the most, but GNOME MPV also works with Vimeo and many other online streaming services. Just give them a try!
Last week, after flashing coreboot on my Librem 13 (as a beta tester of the new coreboot install script), I came across a few problems with my heavily tweaked PureOS install, so I decided to do a full, fresh install of PureOS 3.0 beta so that my environment would be much closer to what a new user would expect.
While reinstalling my creative environment, I decided to write a quick tutorial on installing and using JACK, as it is not straightforward and there are few tutorials about it on the Internet.
What is JACK?
JACK stands for “JACK Audio Connection Kit”. It is free software that lets you handle audio input and output between different applications.
You can see it as a set of audio jacks that you will be able to plug between different programs.
For example, you can use it to connect a software synthesizer (Yoshimi, ZynAddSubFX) to a multitrack sequencer (Ardour, LMMS).
You can use it to connect an audio editing software (Audacity) to a video editing software (Blender).
Many applications have JACK support. Here is a list from the JACK website.
As an example for this tutorial, I will show you how to use Yoshimi with Ardour.
Install the applications
First of all, we need to install all the required applications:
sudo apt install qjackctl ardour yoshimi
Enable real time scheduling
Real-time scheduling is a feature of Linux-based operating systems that enables an application to meet timing deadlines more reliably. It is also considered a potential source of system lockups if your hardware resources are insufficient, so most of the time it is not enabled by default.
As mentioned on the JACK website, JACK requires real-time scheduling privileges for reliable, dropout-free operation.
There is a well-detailed tutorial from the JACK team that describes how to enable real-time scheduling on your system. I will go through the main steps here. This works for me on PureOS but should also work without problems on many other GNU/Linux distributions.
First of all, create a group called “realtime” and add your user to it (replace USERNAME with your current login):
sudo groupadd realtime
sudo usermod -a -G realtime USERNAME
You can check that “realtime” is now part of the user’s groups by running the following command:
groups USERNAME
Also, make sure that the user is part of the “audio” group. If not, just add it:
sudo usermod -a -G audio USERNAME
On PureOS (and Debian), you should have a folder called /etc/security/limits.d. If so, just create and edit the file /etc/security/limits.d/99-realtime.conf with your favorite editor. (If you don’t see this folder, you need to edit /etc/security/limits.conf).
sudo vi /etc/security/limits.d/99-realtime.conf
Add the following lines and save the file:
@realtime - rtprio 99
@realtime - memlock unlimited
You need to logout and login again for the changes to take effect.
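After logging back in, a quick sanity check can confirm that the new limits are active for your session. A minimal sketch (the expected values assume the 99-realtime.conf shown above):

```shell
# Print the per-session resource limits applied by pam_limits.
# With the configuration above, -r should report 99 and -l "unlimited".
ulimit -r   # maximum real-time scheduling priority
ulimit -l   # maximum locked-in-memory size
```

If these still show the old values, make sure you fully logged out of the session; a terminal opened before re-login keeps the old limits.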
WARNING: You should only add new or existing users to the “realtime” group if an application they use (like JACK) requires it. By doing so, you give them fairly high privileges to interact with process priorities, and this may affect the usability of the whole computer.
Before being able to connect anything with JACK, we need to set it up and start its daemon. For that, we will use QJackCtl, a graphical application that controls JACK’s inputs and outputs.
We will first make sure that JACK is setup correctly. Press the “Setup…” button.
I am not an expert in audio hardware and configuration, but this setup works perfectly on my Librem:
- Driver: alsa
- Realtime: yes
- Interface: hw:PCH
- Sample Rate: 44100
- Frames/Period: 128
- Periods/Buffer: 2
Save your settings and, on the main QJackCtl controls window, press the “Start” button. After a few seconds, you should see the “Connections” window popping up. This is where all the connections take place.
Connect Yoshimi to Ardour
Now we are ready to connect our virtual jacks. It is time to open Ardour and create a new session. You should now see a lot more connections in the JACK connections window, showing how Ardour interacts with the system’s audio inputs and outputs.
Let’s add a new track to Ardour. Click the menu “Track”->”Add Track, Bus or VCA…”. Call your new track “Drums” and set it as stereo.
You will now see two more Ardour inputs in the JACK connections window. They carry the name of the audio track we just created and are currently connected to the system’s default capture device (the microphone). That is not what we want, so we will disconnect them.
Right-click on one of them (Drums/audio_in 1) and choose “Disconnect”. It will disconnect the audio capture device. We will now connect our track to Yoshimi.
Open Yoshimi and wait for it to be fully loaded. You should now see Yoshimi’s output appear in the JACK connections window. To connect Yoshimi’s output to Ardour’s input, just drag one on top of the other (making sure to respect the vertical order).
You are now ready to enjoy your fully operational free software powered professional music studio! 🙂
Please, feel free to comment this post or ask any question in our forums.
Have fun! 😉
Working with video files all day long makes me realize that formats are everywhere, and my need to manipulate them freely is constant. That is why I think that, in terms of multimedia creation and publishing, free formats are as important as free software. Free software will always support free formats more readily anyway, while proprietary software may drop support for any format, free or proprietary, as its vendor wishes.
In that regard, my post-production workflow tends to rely on free formats as much as possible. In the world of freedom, we are very lucky to have top-quality free formats for multimedia production, and I would like to share with you the main formats that are part of my post-production workflow.
Note that this is my personal workflow; there may be better workflows and better formats, especially when working on sophisticated projects with a big team. This is just a basic reference that works for most of my projects.
The format you will need to deal with when capturing video footage depends on the camera you use. Most commercial cameras record using proprietary formats, and as of today, I don’t know of any camera capable of recording in free formats. The Axiom by Apertus is a camera based on a free hardware design, but it is still under development and I have never had the chance to test one.
Usually, I have no control over this part, but that is fine. The most important thing here is that my footage has the quality I expect.
I just make sure that the formats generated by the camera will be readable by my free software. Anyway, FFmpeg can read so many formats…
At this point, I may choose to keep my footage as it is or convert it to a free format for storage purposes. This is very useful when capturing in a proprietary format, as I get full control over my footage straight away.
The storage format should be lossless, which means that there will be no data loss during the conversion. This is the top-quality footage that my final rendering should be based on.
When performing this task, I use the following:
- Matroska (MKV) format – Huffyuv (video) / FLAC (audio)
- Matroska (MKV) format – FFv1 (video) / FLAC (audio)
As both formats are lossless, there should not be any quality issue. However, while FFv1 generates a smaller file, it is, in my experience, slower to decode, which may affect the comfort of my workflow at some point. Usually, I prefer Huffyuv.
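As an illustration of this archival step, here is a sketch using FFmpeg directly. The file names are placeholders, and a tiny generated clip stands in for real camera footage:

```shell
# Generate a 2-second stand-in for camera footage (test bars + PCM audio in a .mov).
ffmpeg -v error -y -f lavfi -i testsrc=duration=2:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=2 \
       -c:v mpeg4 -c:a pcm_s16le footage.mov

# Lossless archive: Huffyuv video + FLAC audio in a Matroska container.
ffmpeg -v error -y -i footage.mov -c:v huffyuv -c:a flac archive.mkv

# FFv1 alternative: smaller files, but slower to decode.
ffmpeg -v error -y -i footage.mov -c:v ffv1 -c:a flac archive-ffv1.mkv
```

Comparing the sizes of archive.mkv and archive-ffv1.mkv on real footage shows the trade-off described above.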
When editing, the use of proxies can make your workflow much faster by requiring fewer hardware resources. A proxy is a low-resolution, lightweight version of the original footage.
Proxy files are temporary, and the final rendering doesn’t depend on them. In that regard, you may use whatever format is best adapted to your hardware speed. Kdenlive has an integrated proxy engine that lets you choose between MPEG-1 and Xvid by default. These are not fully free, so I would suggest using the following on a 640px-wide output:
- WebM format – VP8 (video) / Vorbis (audio)
- Ogg format – Theora (video) / Vorbis (audio)
I have always found that VP8 decodes faster and feels lighter than Theora so my choice goes for VP8 here.
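For illustration, generating such a VP8 proxy with FFmpeg might look like this. This is only a sketch: it assumes an FFmpeg build with libvpx (as distro packages usually provide), and a small generated clip stands in for the lossless master:

```shell
# Stand-in lossless master: 1 second of 1280x720 test video in FFV1/Matroska.
ffmpeg -v error -y -f lavfi -i testsrc=duration=1:size=1280x720:rate=25 \
       -c:v ffv1 master.mkv

# 640px-wide VP8 proxy; scale=640:-1 keeps the aspect ratio (640x360 here).
ffmpeg -v error -y -i master.mkv -vf scale=640:-1 -c:v libvpx -b:v 1M -an proxy.webm
```

In a real workflow you would keep the audio (for example with -c:a libvorbis) rather than dropping it with -an; the stand-in master simply has no audio track.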
Compositing and color grading
I usually do all my editing with Kdenlive. When I need to do some advanced compositing, and color grading, I use Blender.
At this stage, I only care about the picture and put the audio aside. I generate image sequences based on my top quality footage and load them into Blender.
For color grading and full-picture visual effects:
For animation compositing:
- PNG with transparent background
- Multi-layered OpenEXR (Very useful to avoid having too many rendered files)
There is one free image format that is still very young but very promising: FLIF. It is a lossless image format that achieves better quality at a smaller file size than PNG. At the time of writing, it is only implemented in ImageMagick, but I hope it will be adopted by many more free software projects in the future.
To be honest, I am not an expert in audio editing and my skills in this area are pretty basic. I mainly use two formats to manipulate audio files:
- Vorbis (compressed for quick preview and editing)
- FLAC (lossless for full quality, final rendering)
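As a quick sketch of these two audio conversions with FFmpeg (a generated sine tone stands in for a real recording, and the file names are placeholders; the Vorbis step assumes libvorbis is available, as it usually is in distro builds):

```shell
# One second of a 440 Hz tone as stand-in source audio.
ffmpeg -v error -y -f lavfi -i sine=frequency=440:duration=1 master.wav

# FLAC: lossless, for full-quality final rendering.
ffmpeg -v error -y -i master.wav -c:a flac master.flac

# Vorbis: compressed, for quick previews and editing.
ffmpeg -v error -y -i master.wav -c:a libvorbis preview.ogg
```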
What would creative freedom be without non-restrictive sharing?
As of today, the most common format on the web is, by far, MP4 (H.264). It is a proprietary format and quite difficult to avoid. I don’t want to impose any restrictions on my audience, so I sometimes use H.264 when I have no alternative, but in any case, I always use the following free formats, which are compatible with many web browsers and platforms and are perfect for streaming purposes:
- WebM format – VP8 (video) / Vorbis (audio)
- Ogg format – Theora (video) / Vorbis (audio)
Ultra-high definition (4k)
Producing 4K videos is also possible with a free-formats workflow. More and more cameras are able to shoot in 4K, and the Librem 15 will give you the horsepower to work comfortably with such large files. The workflow I have presented here is adapted to any resolution; just make sure that you do your final rendering with a format that can handle the highest resolutions:
- WebM format – VP9 (video) / Vorbis (audio)
If you need more information about free formats, you may check the full list on Wikipedia.
I hope this series on media file manipulation has been useful to you, and, as the Blender Foundation would say, “Creative freedom starts here!”
Happy freedom! 🙂
This is the second part of my articles about media conversion with free software. Here is part 1.
This time, I will talk about transcoding your files with a nice and flexible graphical user interface.
I have been looking for a free and open-source front-end to FFmpeg that specializes in media transcoding, but I haven’t been able to find one so far. Instead, I use the video editing software Kdenlive, which is built on top of FFmpeg and integrates a very good interface for media encoding.
PureOS, Kdenlive and my Librem 13 are a perfect combination for my free, libre video editing. Gosh, I don’t regret my old Apple/Adobe workstation!
Installing Kdenlive in PureOS or Debian
Kdenlive is available at its latest version on PureOS. If it is not already installed, open up a terminal and type:
sudo apt install kdenlive
You are ready to go!
Single file transcoding
There are several ways to transcode your files with Kdenlive. This first one will let you trim your file or add any effect to it. It is good for the purpose of transcoding a single file to be streamed on the web for example, or transcoding several files manually, one by one.
First of all, open Kdenlive and drop the file you want to transcode into the Project Bin; you may just drag the file from your file manager into it. Kdenlive should display a message asking you to switch the project’s settings to match your file’s size and framerate. Just click “Switch”.
Then, place this file on your timeline. Make sure that the clip is positioned at the beginning of the timeline. At this point, you may trim your clip or add any effect to it.
Click the “Render” button that is positioned on the top toolbar. You may also go to Project->Render.
The rendering window will appear.
The easiest way to encode is to :
- choose a destination and name for your output file;
- choose a predefined output format;
- make sure that “Full project” is selected;
- click “Render to File”.
You may also get access to more advanced settings by checking “More options”
There you can specify your own FFmpeg settings as well as rescale the output. Note that the parameters in this window are based on MLT, a multimedia framework built on top of FFmpeg, so the syntax may differ a bit from FFmpeg’s. Here is the MLT documentation.
If you wish to add your own encoding profile to the list, just click the “Create new profile” icon. A new dialog will appear with the settings of the currently selected profile. Just update it and save it. You will then be able to select it from the list at any time.
Multiple files transcoding
This time we will see an easy way to transcode multiple files at once. It can also work for a single file but won’t give you the ability to trim it.
Move all the files you wish to transcode into the Kdenlive Project Bin.
Select them all and right-click on one of them. Select “Transcode” and choose the desired format.
A popup will appear, giving you some options. Click OK. The transcoded files should be created in the same folder as the original ones.
If you wish to add your own profile to the list, go to “Settings”->”Configure Kdenlive…”. There, select the “Transcode” tab.
Selecting an existing profile will pre-load the fields. Modify the properties, give it a name and click “Add Profile”. You will then find your new profile in the “Transcode” submenu.
You may have noticed that the profile lists differ between the first and the second method. This is part of a traditional post-production workflow.
Generally, the first method, rendering your timeline, is used for the final rendering of your project. This final render may be sent for additional processing such as VFX or color grading, or it may simply be the final output, ready for delivery (broadcast, web streaming, etc.).
The second method is used to store your footage in high-quality (lossless) video files to be used as the reference, highest-quality video files during editing. While the editing itself may be done using much smaller, lighter “proxy” files, the final rendering should be based on these high-quality files.
I will describe in detail the formats (all free) that I use at every step of my post-production workflow. Stay tuned! 🙂
While completing my next tutorial about media transcoding with free software, I would like to share with you two great things that happened lately in the world of freedom, in terms of art and creative tools.
Kdenlive 16.08 is out!
This first one is exciting: the release of Kdenlive 16.08.
I often hear people say that there is no good free, libre video editing software. Well, that was also my opinion a few months ago, but I don’t agree with this anymore. Kdenlive was already a very capable and solid video editor a few versions ago, and with this version it is starting to be a very serious option for professional-quality work.
Kdenlive is actively developed, and the team of contributors manages to release three new versions a year, each with a lot of bug fixes and a few amazing new features.
This new version adds a couple of very useful features like live preview rendering as a background job. I wish to say a big thank you to the Kdenlive team for their achievement!
I will come back to Kdenlive usage in future posts anyway, so stay tuned!
Working on Kdenlive with the power and comfort of a Librem is a real pleasure! The Librem feels as fast as these free software tools are lightweight compared to their proprietary alternatives. I don’t miss my old proprietary workflow at all. To be honest, I have never had such a comfortable and fast user experience.
Check out my “Studio Libre”! It is where all the film and animation magic happens. It features the Librem 13, running PureOS 2.1 on a super-fast M.2 SSD, with some of my daily applications (Kdenlive, Blender, MyPaint).
Who said that in order to get freedom, you would have to sacrifice hardware speed and comfort?
Note that the Cintiq tablet is plugged into my old computer (also running PureOS 2.1). I use it for animations and drawing clean-up. This heavy setup is to be replaced by a more powerful, libre and lightweight Librem 11.
Pepper & Carrot goes animated!
As an animator myself, I have been delighted by this news.
If you still don’t know Pepper & Carrot, you may check out the website. It is a webcomic by French independent artist David Revoy, telling the story of a young witch (Pepper) and her cat (Carrot) in a fantasy world of potions, magic and funny creatures. David Revoy uses only free software (mostly Krita) to create this comic and releases it under a CC-BY license.
The animated version will be made by Russian independent animator Nikolai Mamashev, also using only free software and releasing it under a copyleft license (CC-BY-SA).
Both artists are very talented and know what they are doing.
I think the success of this crowdfunding campaign is very important for the popularity of our philosophy of freedom, as the webcomic is starting to get a lot of attention (even from big comics publishers), and this animated version would find its audience straight away. It would give a lot of credibility and popularity to free art and free tools, and of course it would show that success is possible when doing things the ethical way.
So if you wish to contribute or spread the word, here is the crowdfunding page.
On a personal note…
I wish to add that this post was written on my own initiative as a supporter of libre art; neither Purism nor I are technically or financially involved in these projects.
My only point here is that the existence of very capable, professional free, libre creative tools helps the development of libre art by attracting more artists. In return, this great art helps make the tools, as well as our philosophy of freedom, more popular.
This is a virtuous circle in which we already stand. It is, in my opinion, an unstoppable and exponential movement: the more popularity it gets, the more beautiful and enjoyable it becomes for everyone. This movement is very slowly (soon much faster) changing our world for the benefit of the people, one little freely licensed creation at a time.
Here at Purism we wish to contribute to this ethical world by building modern and powerful computers that are focused solely on their users’ freedom.
In the world of freedom, competition is not discriminating, and as everything is made in the public interest, the success of one project is the success of everyone.
Thoughts ? Send them to feedback(at)shop.puri.sm
Ready to switch to free software for your multimedia creations?
Let’s start with understanding the files and formats that we are going to manipulate in our workflow.
Codecs and Formats
When it comes to multimedia creation and publishing, using the right format, the right codec, converting, scaling, compressing, can be a real pain. Thankfully, in the world of Freedom, we have some of the best tools to help us manipulate media files and avoid a lot of frustration.
By the way, what’s a Format? And what’s a Codec?
You may wonder what the difference is between a format and a codec. Think of the format as the container for the entire media file’s data, representing both audio and video, and the codec as the way this data is encoded and decoded. The same format can hold data described by different codecs, and a codec can be used with different formats.
As an example, the Matroska (.mkv) format can store either H.264 or Theora encoded video with Opus or FLAC audio, while the Ogg format can also hold Theora video and Opus or Vorbis audio. Is this making sense?
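One way to see this container/codec split in practice is ffprobe, FFmpeg’s inspection tool. A small sketch (the sample file is generated on the spot, using codecs that ship with FFmpeg itself):

```shell
# Build a small Matroska sample holding FFV1 video and FLAC audio.
ffmpeg -v error -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v ffv1 -c:a flac sample.mkv

# Report the container (format_name) and each stream's codec (codec_name).
ffprobe -v error -show_entries format=format_name:stream=codec_name \
        -of default=noprint_wrappers=1 sample.mkv
```

The output lists the Matroska container once and a codec per stream (ffv1 and flac here): one format, several codecs.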
These are just examples, but I have to admit that there are so many different formats and codecs that it is sometimes very difficult to see clearly. What I suggest is to use only a few formats and codecs, the ones we really need. I will come back to this point in a future article.
The software that I use for digital media transcoding is FFmpeg.
FFmpeg is a command-line tool that manipulates formats and codecs. Many free software applications already rely on FFmpeg, so you may never need to use the commands directly, but if you are comfortable with the terminal, FFmpeg can be very useful for quick conversions.
I will cover the basic usage of FFmpeg in this post.
If you don’t like using the terminal or don’t easily remember commands (just like me), don’t worry: I will cover media conversion with a clean GUI in a future post.
FFmpeg, being very popular, is pretty easy to install and should be directly available in the repositories of the most common GNU/Linux distributions.
When using PureOS, FFmpeg should be installed by default, but if it is not, just use the following command in a terminal:
sudo apt install ffmpeg
In order to get a list of formats supported by FFmpeg, open up a terminal window and type:
ffmpeg -formats
And for a list of supported codecs, type:
ffmpeg -codecs
As you can see, the list is quite impressive! FFmpeg can manipulate the most common free and proprietary digital audio and visual formats.
Converting a video can be achieved with a simple command line:
ffmpeg -i input.mov output.webm
This command converts a QuickTime .mov file to the .webm format with the default encoders (keeping the same scale, framerate and bitrate).
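Building on that, a short shell loop can convert a whole folder of files. This sketch remuxes every .mov into Matroska with -c copy (no re-encoding, so it is fast and lossless); the sample clips are generated only for demonstration:

```shell
# Two tiny sample clips standing in for camera files.
for n in 1 2; do
    ffmpeg -v error -y -f lavfi -i testsrc=duration=1:size=160x120:rate=10 \
           -c:v mpeg4 "clip$n.mov"
done

# Repackage every .mov into .mkv, copying the streams bit-for-bit.
for f in *.mov; do
    [ -e "$f" ] || continue        # nothing to do if no .mov files exist
    ffmpeg -v error -y -i "$f" -c copy "${f%.mov}.mkv"
done
```

Replace -c copy with real encoder options when you need an actual format conversion rather than a repackaging.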
If you need to use a specific codec’s encoder for the chosen format, just specify it with -vcodec (for video) and -acodec (for audio):
ffmpeg -i input.mov -vcodec libtheora -acodec libvorbis output.ogg
# To get a list of all encoders
ffmpeg -encoders
In order to convert a video to an image sequence, you may do the following:
ffmpeg -i input.mov sequence/output_%05d.png
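The reverse direction, turning an image sequence back into a video, uses -framerate on the input. A self-contained sketch (the PNG sequence is generated first, and FFV1 is just one lossless choice for the result):

```shell
mkdir -p sequence
# Dump one second of test video (10 frames) as numbered PNG files.
ffmpeg -v error -y -f lavfi -i testsrc=duration=1:size=160x120:rate=10 \
       sequence/output_%05d.png

# Reassemble the frames into a lossless FFV1 video at the same frame rate.
ffmpeg -v error -y -framerate 10 -i sequence/output_%05d.png -c:v ffv1 rebuilt.mkv
```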
If you wish to rescale the picture of the video, you can use the -vf option (video filter) and set the “scale” value:
# Rescales to a fixed 320×240 size
ffmpeg -i input.mov -vf scale=320:240 output.webm
# Rescales the width, keeping the aspect ratio
ffmpeg -i input.mov -vf scale=320:-1 output.webm
# Doubles the size, keeping the aspect ratio
ffmpeg -i input.mov -vf scale=iw*2:ih*2 output.webm
# Forces the image to fit into a 320×240 box
ffmpeg -i input.mov -vf scale=w=320:h=240:force_original_aspect_ratio=decrease output.webm
If you wish to force a constant bitrate for your video, use the -b option (-b:v for video, -b:a for audio):
ffmpeg -i input.mov -b:v 8000k -b:a 128k output.webm
You may also use -minrate and -maxrate to control the minimum and maximum bitrate tolerance (in bits per second).
These are the conversions I use the most, and they represent only a small part of FFmpeg’s real potential. There are many more options and filters, so if you want to know more, I suggest browsing the FFmpeg documentation.
Coming next is a tutorial on media conversion using the great GUI from Kdenlive. To complete this series of articles about media file manipulation, I will share with you the different free formats, codecs and parameters that I use in my entire video production workflow.
Stay tuned! 😉
One last thing…
As you were patient enough to read this article to the end, here is a little present: a script that I use to quickly convert short video animations into animated GIFs.
This script generates a good-quality GIF that is scaled down to 640px wide and ready to be embedded in any webpage. I found it in this excellent tutorial.
palette="/tmp/palette.png"
filters="fps=15,scale=640:-1:flags=lanczos"
ffmpeg -v warning -i "$1" -vf "$filters,palettegen" -y "$palette"
ffmpeg -v warning -i "$1" -i "$palette" -lavfi "$filters [x]; [x][1:v] paletteuse" -y "$2"
To call this script, just use the following (here I assume you saved it as makegif.sh and made it executable with chmod +x makegif.sh):
./makegif.sh input.mov output.gif
Happy free transcoding! 🙂
Thoughts? Send them to feedback(at)shop.puri.sm