A Few Words
This Software Development Cookbook is mostly a collection of notes taken from my own experience and from many other sources gathered over the years, and I am still collecting.
It is disorganized and a work in progress. Many entries are probably outdated and need updating. Someday I will find the time and energy to clean this up, and it may turn into one or more finished books, who knows?
Project in github or gitlab
github
Create a new repository on github. Then create a new local folder prj_dir containing a README.md file, and run:
cd prj_dir
git init
git add .
git commit -m "first commit"
git remote add origin https://github.com/USERNAME/REPOSITORY.git
git push -u origin master
Push an existing folder maintained with git,
git remote add origin https://github.com/USERNAME/REPOSITORY.git
git push -u origin master
Note: vscode now provides a GUI to create a github repository directly from a local repo with the push of a button.
gitlab
Create a local project directory. In this example we will use prj_dir as the directory name. Populate prj_dir with all the files and folders that will go into the initial commit.
cd prj_dir
git init
git add .
git commit -m "initial commit"
git push --set-upstream https://gitlab.com/USERNAME/REPOSITORY.git master
git remote add origin https://gitlab.com/USERNAME/REPOSITORY.git
git pull
I created a convenience npm module that executes the above commands so you don't have to type them individually. You can clone it from https://gitlab.com/kkibria/gitlab.git, build the npm module, and install it.
Changing a remote's URL
The git remote set-url
command changes an existing remote repository URL.
$ git remote -v
> origin https://github.com/USERNAME/REPOSITORY1.git (fetch)
> origin https://github.com/USERNAME/REPOSITORY1.git (push)
# Change remote's URL,
$ git remote set-url origin https://github.com/USERNAME/REPOSITORY2.git
# Verify
$ git remote -v
> origin https://github.com/USERNAME/REPOSITORY2.git (fetch)
> origin https://github.com/USERNAME/REPOSITORY2.git (push)
Setting up your own git server
- https://medium.com/@kevalpatel2106/create-your-own-git-server-using-raspberry-pi-and-gitlab-f64475901a66
- Install self-managed GitLab. They have a version for raspberry pi.
Git Workflow
- https://musescore.org/en/handbook/developers-handbook/finding-your-way-around/git-workflow. Describes git workflow for their project, but a great page to consider for any project using git.
Remove tags
This little python script creates a powershell script that removes every tag that doesn't look like a version tag (v followed by a number):
import subprocess
import re

# collect all local tag names
proc = subprocess.Popen('git tag', stdout=subprocess.PIPE)
tags = proc.stdout.read().decode("utf-8").split()

# emit a powershell script that deletes every tag not matching v<number>
file1 = open("tagremove.ps1", "w")
for tag in tags:
    found = re.match(r"^v\d+", tag)
    if found:
        continue
    file1.write("git tag -d {tag}\ngit push --delete origin {tag}\n".format(tag=tag))
file1.close()
Autogenerate binaries using github action
Github action yaml file that builds binary on a tag push.
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: Binary release

on:
  push:
    tags:
      - 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10

jobs:
  build-windows:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install wxPython
          pip install pyinstaller
      - name: build executable
        run: |
          pyinstaller -F --add-data "./source/datafile.txt;." "./source/myapp.py"
      - uses: actions/upload-artifact@v2
        with:
          name: myapp-windows
          path: dist/

  build-macos:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install wxPython
          pip install pyinstaller
      - name: build executable
        run: |
          pyinstaller -F --add-data "./source/datafile.txt:." "./source/myapp.py"
      - uses: actions/upload-artifact@v2
        with:
          name: myapp-macos
          path: dist/

  build-ubuntu:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          sudo apt-get install build-essential libgtk-3-dev
          URL=https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-20.04
          python -m pip install --upgrade pip
          pip install -U -f $URL wxPython
          pip install pyinstaller
      - name: build executable
        run: |
          pyinstaller -F --add-data "./source/datafile.txt:." "./source/myapp.py"
      - uses: actions/upload-artifact@v2
        with:
          name: myapp-ubuntu
          path: dist/

  create-release:
    needs: [build-windows, build-macos, build-ubuntu]
    runs-on: windows-latest
    steps:
      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
          draft: false
          prerelease: false
      - name: get windows artifact
        uses: actions/download-artifact@v2
        with:
          name: myapp-windows
          path: windows/
      - uses: papeloto/action-zip@v1
        with:
          files: windows/
          dest: myapp-windows-exe.zip
      - name: Upload Windows Asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          # This pulls from the CREATE RELEASE step above, referencing its ID to get its
          # outputs object, which includes an `upload_url`. See this blog post for more info:
          # https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: ./myapp-windows-exe.zip
          asset_name: windows-exe.zip
          asset_content_type: application/zip
      - name: get macos artifact
        uses: actions/download-artifact@v2
        with:
          name: myapp-macos
          path: macos/
      - uses: papeloto/action-zip@v1
        with:
          files: macos/
          dest: myapp-macos-exe.zip
      - name: Upload macos Asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: ./myapp-macos-exe.zip
          asset_name: macos-exe.zip
          asset_content_type: application/zip
      - name: get ubuntu artifact
        uses: actions/download-artifact@v2
        with:
          name: myapp-ubuntu
          path: ubuntu/
      - uses: papeloto/action-zip@v1
        with:
          files: ubuntu/
          dest: myapp-ubuntu-exe.zip
      - name: Upload ubuntu Asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: ./myapp-ubuntu-exe.zip
          asset_name: ubuntu-exe.zip
          asset_content_type: application/zip
Remove all local and remote tags
# Clear All local tags
git tag -d $(git tag -l)
# Fetch remote All tags
git fetch
# Delete All remote tags
git push origin --delete $(git tag -l)
# Clear All local tags again
git tag -d $(git tag -l)
create and push a specific tag
# create a tag
git tag test123
# list all local tags
git tag -l
# push a specific tag to remote named 'origin'
git push origin tag test123
Using vscode
Install vscode in ubuntu
First, update the packages index and install the dependencies by typing:
sudo apt update
sudo apt install software-properties-common apt-transport-https wget git
Next, import the Microsoft GPG key using the following wget command:
wget -q https://packages.microsoft.com/keys/microsoft.asc -O- | sudo apt-key add -
And enable the Visual Studio Code repository by typing:
sudo add-apt-repository "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main"
Once the apt repository is enabled, install the latest version of Visual Studio Code with:
sudo apt update
sudo apt install code
Visual Studio Code has been installed on your Ubuntu desktop and you can start using it. Next, on ubuntu, set up the credential cache so that you don't have to keep typing your origin username and password:
git config --global credential.helper store
cpp setup
- https://github.com/Microsoft/vscode-cpptools/blob/master/Documentation/LanguageServer/MinGW.md
- https://youtu.be/dSGW-DLMnUc
To get the compiler's include paths, run: gcc -v -E -x c++ -
Debugging
g++ -ggdb
To strip debugging symbols, use the -s option for release builds.
g++ -ggdb -s
Vscode requires xterm, so install it: sudo apt install xterm
Powershell setup
When powershell starts, it looks for a startup script at the path stored in the $profile variable.
You can view and edit this file by typing code $profile in powershell.
Probably the simplest strategy here is to look for a script called .psrc.ps1 in the project root folder and, if it exists, execute it.
Add the following to the opened startup script,
$rc = ".psrc.ps1"
if (Test-Path -Path $rc -PathType Leaf) {
& $rc
}
This way you can put project specific startup commands in .psrc.ps1. One common use of this would be to add to or modify the path variable.
$env:Path = "SomeRandomPath"; (replaces existing path)
$env:Path += ";SomeRandomPath" (appends to existing path)
Hard wrap for editing comments
Check the VS code to edit markdown files section for a way to edit comments in your source files.
Using jupyter notebook in vscode
Check Connect a vscode notebook as client
Windows MSI installer
- https://docs.microsoft.com/en-us/windows/win32/msi/windows-installer-examples
- https://willpittman.net:8080/index.php?title=Msi
- https://willpittman.net:8080/index.php?title=Python_msilib_basics
wix tutorial
The wix tutorial explains a lot of core msi concepts. Good reading material.
- https://www.firegiant.com/wix/tutorial/
- https://weblogs.sqlteam.com/mladenp/tags/wix-windows-installer-xml-toolset/
- https://youtu.be/usOh3NQO9Ms
Jupyter Notebook
Running jupyter ipykernel locally
Create or open a new terminal for the project. Activate the virtual environment.
Get jupyter installed
Install it in your virtual environment if you haven't already.
pip install jupyter notebook
Start the server
python -m jupyter notebook --no-browser
This will start the server. It will print the url of the server with a random token, which usually looks something like:
http://localhost:8888/tree?token=25e02d584db26b33f9171302057b32e19f6b32e6227b48d7
Copy the url.
Connect a vscode notebook as client
There are many benefits to using vscode as a client; editing source with intellisense is always helpful. Follow these steps to connect vscode to the server.
- Open the .ipynb file. This will ask you to select a kernel. Select an existing kernel.
- It will ask for a url; type the url you copied and hit enter.
- Type a display name so that you can identify it later (the project name is a good idea), and hit enter.
It will connect and now you can use it. For other .ipynb files in the same project you can select the same display name.
Debug mode
To run in debug mode,
python -m jupyter notebook --debug --no-browser
This will produce a lot of messages to help diagnose problems.
Custom visualization
Frontend code development
Code editing sandboxes
- https://codesandbox.io/.
- https://codepen.io/.
- https://jsfiddle.net/.
- https://replit.com/
- A review of sandboxes: 9 Best Online Code Editors for Web Applications.
CSS framework
- Tailwind uses utility classes as the building blocks of css styling.
- Fun with Viewport Units.
CSS Articles
- A Complete Guide to CSS Functions.
- The Beauty of CSS.
- Absolute, Relative, Fixed Positioning: How Do They Differ?
D3.js: Javascript, SVG and CSS to create graphical widgets and animations
- Amelia Wattenberger's site has a lot of interesting articles about frontend visuals.
- Creating a Gauge in React by Amelia.
- Data-Driven Documents d3js javascript library, Github source. Really awesome visual effects can be built using this library. d3 has a bunch of cool features like force simulation, geo projections, etc.
- D3 is svelte. I have forked this in my svelte REPL.
- D3 Force Graph - svg. I have forked this in my svelte REPL.
- Refer to an external SVG via D3 and/or javascript.
- How to use svg file for image source in D3 (a minimal loading sketch follows this list).
- Web application that parses SVG files and returns d3.js code.
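For example, a minimal sketch of pulling an external SVG file into the page with D3 (assumes d3 v5 or later, where d3.xml returns a Promise; the file name and container id are made up for illustration):

import * as d3 from "d3";

d3.xml("icons/widget.svg").then((doc) => {
    // doc.documentElement is the <svg> root of the fetched file
    const svgNode = document.importNode(doc.documentElement, true);
    d3.select("#container").node().appendChild(svgNode);
    // once inlined, the SVG internals are regular DOM and can be styled with D3
    d3.select(svgNode).selectAll("path").attr("fill", "steelblue");
});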
Common Javascript DOM recipes
Server side rendering with PWA
- How to combine PWA and isomorphic rendering (SSR)?.
- How To Turn a Server-Side-Rendered React SPA Into a PWA
- SPAs, PWAs and SSR
- LinkedIn Lite: A Server-Side Rendered PWA
- Building a hybrid-rendered PWA
designing CSS
- Designing in the Browser
- javascript color library for css manipulation
- 1-Line Layouts, youtube 10 modern layouts in 1 line of CSS.
- https://css-tricks.com/snippets/css/a-guide-to-flexbox/
SVG
Canvas arts
polyfills
- Polyfills: everything you ever wanted to know, or maybe a bit less
- Loading Polyfills Only When Needed.
async/await
user authentication
Icons
dimensions
User interface look and feel
- https://developer.apple.com/design/human-interface-guidelines/
- https://material.io/
- Color palette for designers https://colorhunt.co/
- https://youtu.be/tClRHOnHveY
Markdown editor
A markdown renderer and editor running in the browser could be useful for many content-authoring applications.
markdown-it package
#install markdown-it
npm install markdown-it
#install markdown-it addons
npm install markdown-it-abbr markdown-it-container markdown-it-deflist markdown-it-emoji markdown-it-footnote markdown-it-ins markdown-it-mark markdown-it-sub markdown-it-sup
#install highlighter for markdown
npm install highlight.js
Setting up for markdown editing
Add a javascript file, for example myjs.js, as shown:
'use strict';
const MarkdownIt = require('markdown-it');
module.exports.mdHtml = new MarkdownIt()
.use(require('markdown-it-abbr'))
.use(require('markdown-it-container'), 'warning')
.use(require('markdown-it-deflist'))
.use(require('markdown-it-emoji'))
.use(require('markdown-it-footnote'))
.use(require('markdown-it-ins'))
.use(require('markdown-it-mark'))
.use(require('markdown-it-sub'))
.use(require('markdown-it-sup'));
Now we can bundle this javascript for the browser and use it in our html web app.
For instance, in Svelte we can do the following,
<script>
import { mdHtml } from "./myjs.js";
let source = 'markdown content';
$: markdown = mdHtml.render(source);
</script>
<textarea bind:value={source} />
<div>{@html markdown}</div>
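The install step above also pulls in highlight.js, but the wrapper never enables it. markdown-it has a documented highlight option for fenced code blocks; a sketch of wiring it into myjs.js, assuming the highlight.js v11 API, might look like this (remember to also include a highlight.js CSS theme on the page):

'use strict';
const MarkdownIt = require('markdown-it');
const hljs = require('highlight.js');

module.exports.mdHtml = new MarkdownIt({
    // called for every fenced code block; return highlighted HTML or '' to fall back
    highlight: (str, lang) => {
        if (lang && hljs.getLanguage(lang)) {
            try {
                return hljs.highlight(str, { language: lang }).value;
            } catch (e) { /* fall through to default escaping */ }
        }
        return '';
    }
})
    .use(require('markdown-it-footnote')); // the other .use() plugins from above still apply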
Synchronized scrolling
This is a rather interesting subject. This sample project I did implements it using the scheme from the markdown-it demo. VS code probably uses something similar, but with more features. The VS code source is worth exploring to learn more.
Every time the content is updated, the demo injects line numbers into the generated content using injectLineNumbers. Next, buildScrollMap builds a map of line number versus position using a hidden element, sourceLikeDiv.
This map is used by the following scroll handlers:
- syncSrcScroll: monitors the generated content scroll position and synchronizes the markdown source position.
- syncResultScroll: monitors the markdown source scroll position and synchronizes the generated content position.
A simplified sketch of this scheme follows.
Showdown.js
- Github Showdown.js source.
- Code highlighting: [showdown highlight js extension](https://stackoverflow.com/questions/21785658/showdown-highlightjs-extension)
- Github Showdown highlighter source,
- Highlight.js, a general purpose highlighter, https://highlightjs.org/, Github source.
- Check showdown extensions, Github. To develop a new extension take a look at their template at github. There are other extensions, google it.
- Showdown extension writeup, https://github.com/showdownjs/ng-showdown/wiki/Creating-an-Extension.
Showdown use
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>dev_cookbook</title>
<script src="https://unpkg.com/showdown/dist/showdown.min.js"></script>
</head>
<body>
<script>
var div = document.createElement("DIV");
document.body.appendChild(div); // Append <div> to <body>
var converter = new showdown.Converter();
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function () {
if (this.readyState == 4 && this.status == 200) {
div.innerHTML = converter.makeHtml(this.responseText);
}
};
xhttp.open("GET", "README.md", true);
xhttp.send();
</script>
</body>
</html>
Using a code editor for entering text
Instead of a textarea to enter the source, we can use a code editor; a minimal Ace sketch follows the list below.
- Dillinger is a good example, Github source. It also integrated server side pdf generation of markdown render.
- Dillinger uses Ace code editor, Github source. Ace allows highlighting code.
- highlight.js has Markdown syntax highlighting, integrating markdown highlighting might be a good idea.
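A hedged sketch of using Ace for the source pane (assumes ace is already loaded on the page, a div with id editor exists, and md is whatever markdown renderer is in use):

const editor = ace.edit("editor");                 // attach Ace to <div id="editor">
editor.session.setMode("ace/mode/markdown");       // markdown syntax highlighting in the editor
editor.session.on("change", () => {
    const src = editor.getValue();
    // hand src to the markdown renderer, e.g. preview.innerHTML = md.render(src)
});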
Using Flexbox
These notes are rough and need to be rewritten and organized. https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_flexible_box_layout/Basic_concepts_of_flexbox describes flexbox quite nicely; the intent here is just to provide enough of an introduction that the mdn page is easy to follow.
Div with elements
Most elements in HTML have a display of either block or inline. By default, block elements take up the entire width of the screen, which forces the following elements onto the next line. inline makes elements behave like text, keeping them on the same line.
When we apply display: flex we leave this system behind and enter a new world where things work differently.
The most important concept in flexbox is the pair of axes along which items are positioned: the main axis and the cross axis. They control the flow of the flexbox layout; to position your elements you have to keep both of these axes in mind.
Positioning Horizontally
To position elements along the main axis you use the justify-content property, which has three basic values. flex-start places items at the start of the main axis; this is the default value, which is why the boxes sit at the left side of the screen. flex-end places them at the end of the main axis, aligning the boxes on the right side. center places them at the center of the main axis. So display: flex plus justify-content: center is all it takes to center elements horizontally.
Positioning Vertically
To align elements vertically we need to consider the cross axis. Increase the size of the flexbox layout by applying a min-height of 800px to the body; the border now frames the whole flexbox layout. To center elements on the cross axis use the align-items property. It accepts the same values as justify-content: flex-start for top positioning (the start of the cross axis), flex-end for bottom positioning, and center to put the items in the middle.
How to Center a div
To center a div you need just three lines of code: display: flex, justify-content: center and align-items: center, which center elements along both the main and cross axes. Because flexbox is flexible, this works even if the size of the layout changes; the boxes stay centered no matter how the height and width of the body change.
Justify and align
justify-content and align-items accept three additional values: space-between, space-around and space-evenly. space-between distributes the elements from left to right with the first and last element touching the edges of the layout; the bigger the container, the bigger the gaps in between. space-around works similarly, but the first and last elements do not touch the edges; every element gets space on its left and right. The spacing adjusts automatically when the container is resized, but the right space of one box adds up with the left space of the next, so the inner gaps end up twice as big as the gaps at the edges. If you don't want that, use space-evenly, which makes all the spaces the same size. All of these values adjust when the container size changes, and that is the whole point of flexbox: you could achieve the same layouts with margins and padding alone, but they are not as flexible. The same values can be used on align-items, but with a single row of content that is not useful; they become relevant on the cross axis once layouts get more complex.
Flex direction
The flex-direction property controls the direction of the main axis. Its default value is row, which makes the main axis go from left to right. row-reverse makes it go from right to left, which also changes how justify-content works: flex-start is now on the right side and flex-end on the left. flex-direction: column makes the main axis go from top to bottom, so the elements stack on top of each other, and the cross axis flips as well. With a column direction, centering the boxes horizontally is no longer done with justify-content (which now acts top to bottom) but with align-items: center. It is important to remember the flex direction of your layout, because it changes the way all the other properties work. A common use case: to keep a page laid out vertically but centered horizontally, apply display: flex, flex-direction: column and align-items: center to the body.
Gap and flex wrap
The gap property creates a gap between items, so you no longer need margins inside a flexbox layout; for example gap: 20px gives every box a small gap. The flex-wrap property can make a layout responsive with one line of code: wrap moves items to the next line when there is not enough space, while nowrap keeps them on one line and lets them shrink instead.
Align content
With flex-wrap: wrap each line gets its own main and cross axis. align-items only controls the alignment along each line's individual cross axis (flex-start, flex-end, center). To control the alignment of all the lines together, use align-content. Its default value is space-around, which is why the gaps between wrapped lines look big; space-between, space-evenly, flex-start, flex-end and center are also available. In short, align-items aligns the items within each flexbox line individually, while align-content aligns the lines as a group. A fully centered wrapped layout uses justify-content: center, align-items: center and align-content: center. When a layout has both horizontal and vertical gaps, the gap property can be split into row-gap and column-gap to give them different values, for example column-gap: 10px and row-gap: 20px; usually the single gap property is enough. These properties apply to any HTML element, not just the body: to center the numbers inside the boxes, give the box selector display: flex, justify-content: center and align-items: center.
Flex shrink and flex grow
Items can also be resized responsively instead of wrapped. With no flex-wrap applied, boxes shrink automatically when the viewport gets too small. This behavior is controlled by flex-shrink, which is applied to the flex items, not the container. flex-shrink: 0 prevents shrinking and lets items overflow the container; flex-shrink: 1 (the default) lets them shrink automatically. Items can get different values: giving the first box flex-shrink: 0 while the others keep 1 means every element shrinks except the first. That is useful when a specific element, such as an image or icon, should not be distorted.
flex-grow is the counterpart: it lets items stretch along the main axis to fill the empty space in the parent element. Its default is 0, so items do not grow unless you enable it with flex-grow: 1. You can enable it only for specific items; for example, in a to-do application the to-do text can grow while the checkbox and delete button keep their size. flex-grow and flex-shrink are not just switches, they work as multipliers: with flex-grow: 1 on the boxes and flex-grow: 5 on the first box, every element can grow but the first one takes five times more of the newly available space (it is not simply five times bigger). The same applies to flex-shrink. In practice you mostly just turn growing and shrinking on or off uniformly.
Min and max sizes
flex-grow and flex-shrink become more powerful combined with min-width and max-width, which define where growing and shrinking stop. For example, flex-grow: 1 with max-width: 300px lets boxes grow only until they reach 300px; flex-shrink with min-width: 100px lets them shrink only down to 100px, after which they overflow. Overflow should normally be avoided, so combine flex-shrink with flex-wrap via a media query: use nowrap while shrinking still works, and switch to wrap once the screen gets small enough that the min-width would cause overflow. This is an easy way to make a website responsive.
Align self
The align-self property works like align-items but is set on an individual flex item. With align-items: flex-start on the container, the first box can be given align-self: flex-end (or center) to position only that item differently on the cross axis. There is no equivalent flexbox property for the main axis; justify-self belongs to CSS grid. To push one item to the opposite side of the main axis, use the old-school trick of margin-right: auto on that item. This is very useful for navigation bars: company logo on the left, everything else on the right.
Summary
You can position anything inside a flex container by thinking in terms of the main and cross axes and their properties justify-content and align-items, wrap elements to the next line with flex-wrap: wrap, and resize elements with flex-grow and flex-shrink, giving different values per element if needed. Combined with min/max sizes and media queries, this covers most layouts. For more complex layouts CSS grid can be simpler and shorter: centering a div takes three lines in flexbox (display: flex, justify-content: center, align-items: center) but only two in grid (display: grid, place-content: center).
Server side and/or Headless rendering
Rendering JS
- https://developers.google.com/web/tools/puppeteer/articles/ssr
- https://medium.com/swlh/video-export-from-p5-js-sketch-1b9b6287801a
- https://github.com/TrevorSundberg/h264-mp4-encoder
- https://github.com/ffmpegwasm/ffmpeg.wasm
- https://stackoverflow.com/questions/62863547/save-canvas-data-as-mp4-javascript
Firebase matters
Firebase Auth sample
- YouTube Flutter Web - Firebase Authentication for your web apps. Github link used in this video.
Firebase Auth articles
- Cross-Origin Resource Sharing (CORS) article, Do you really know CORS?.
- Using function api- How to Build a Role-based API with Firebase Authentication, sources in github.
- Controlling Data Access Using Firebase Auth Custom Claims (Firecasts)
Email link sign in
- Article Firebase Email Link Authentication.
- Article Working with Firebase Dynamic links.
- We have to whitelist dynamic link domain, article Firebase says “Domain not whitelisted” for a link that is whitelisted
Google sign in
Enable the google sign-in in the authentication tab in firebase console for the project. In the enable dialog, expand the web SDK config.
Copy the Web client ID and save the setting. Let's say this value is somerandomstuff.apps.googleusercontent.com. Now copy the client ID value into the web/index.html file in a meta tag.
<head>
...
<meta name="google-signin-client_id" content="somerandomstuff.apps.googleusercontent.com" />
...
<title>my awesome pwa app</title>
<link rel="manifest" href="/manifest.json">
...
</head>
Stack Overflow
- Google api problem Firebase: 403 PERMISSION_DENIED
Firebase security videos
- Security Rules
- Firebase Database Rules Tutorial
- Youtube Firestore Security Rules - How to Hack a Firebase App
- Firestore Rules Testing with the Emulator - New Feature
- Security Rules! 🔑 | Get to Know Cloud Firestore #6
Firebase database rule generator
Cloud Firestore rule generator
Firestore
firestore rules common functions
service cloud.firestore {
  match /databases/{database}/documents {

    function isSignedIn() {
      return request.auth != null;
    }

    function emailVerified() {
      return request.auth.token.email_verified;
    }

    function userExists() {
      return exists(/databases/$(database)/documents/users/$(request.auth.uid));
    }

    // [READ] Data that exists on the Firestore document
    function existingData() {
      return resource.data;
    }

    // [WRITE] Data that is sent to a Firestore document
    function incomingData() {
      return request.resource.data;
    }

    // Does the logged-in user match the requested userId?
    function isUser(userId) {
      return request.auth.uid == userId;
    }

    // Fetch a user from Firestore
    function getUserData() {
      return get(/databases/$(database)/documents/accounts/$(request.auth.uid)).data
    }

    // Fetch a user-specific field from Firestore
    function userEmail(userId) {
      return get(/databases/$(database)/documents/users/$(userId)).data.email;
    }

    // example application for functions
    match /orders/{orderId} {
      allow create: if isSignedIn() && emailVerified() && isUser(incomingData().userId);
      allow read, list, update, delete: if isSignedIn() && isUser(existingData().userId);
    }
  }
}
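A client-side write that satisfies the orders create rule above might look like this hedged sketch (assumes the v8-style Firebase JS SDK and a signed-in user whose email is verified):

// Hypothetical client code: the created document carries userId == request.auth.uid,
// so isSignedIn(), emailVerified() and isUser(incomingData().userId) all pass.
const user = firebase.auth().currentUser;

firebase.firestore().collection("orders").add({
    userId: user.uid,        // must match the authenticated uid
    items: ["widget"],       // any other payload fields
});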
firestore rules data validation
function isValidProduct() {
  return incomingData().price > 10 &&
         incomingData().name.size() < 50 &&
         incomingData().category in ['widgets', 'things'] &&
         existingData().locked == false &&
         getUserData().admin == true
}
Firestore matters
Firestore security
Cloud firestore is a database for the serverless architecture Firebase uses. The following are notes I took while watching the google firebase team's youtube playlist Get to know Cloud Firestore.
Notes
A document contains a tree structure of information, but a document does not contain another document. Maximum size is 1 MB. Documents can point to sub-collections.
document = {
  bird_type: "swallow",
  airspeed: 42733,
  coconut_capacity: 0.62,
  isNative: false,
  icon: <binary data>,
  vector: {
    x: 36.4255,
    y: 25.1442,
    z: 18.8816
  },
  distances_traveled: [
    42,
    39,
    12,
    421
  ]
}
Collections only contain documents:
collection = {
hash1: document1
hash2: document2
}
At the root we will have a collection, so the path to a document may look like,
var messageRef = firestore.collection('rooms').doc('roomA').collection('messages').doc('message1');
Each level of the path comes as a .collection(...).doc(...) pair.
Every field in a document is automatically indexed by Firestore. Depending on the query, we may have to create composite indexes when a query fails; Firestore will log a link to the console which can be used to create the exact composite index needed for that specific query (an example of a query that needs one is sketched below).
Note: we have to copy the composite index back into our firestore project config so that it gets pushed the next time we deploy.
Question: does the firestore emulator do the same? Answer: No. The emulator automatically builds missing indexes; there is currently no way to find out which indexes it built, although google might add that in the future.
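For example, a query like the following (v8-style JS SDK, hypothetical collection and fields) combines an equality filter with ordering on a different field, which needs a composite index; the failed query's error message contains the console link:

const query = firebase.firestore()
    .collection("restaurants")
    .where("city", "==", "Tokyo")    // equality filter on one field
    .orderBy("rating", "desc")       // ordering on another field -> composite index
    .limit(20);

query.get()
    .then((snap) => snap.forEach((doc) => console.log(doc.id)))
    .catch((err) => console.log(err.message));  // message includes the index-creation link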
General rules
- Documents have limits.
- You can't retrieve a partial document.
- Queries are shallow.
- You're billed by the number of reads and writes you perform.
- Queries find documents in collections.
- Arrays are weird.
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // Completely locked
      allow read, write: if false;
    }
  }
}
service cloud.firestore {
  match /databases/{database}/documents {
    match /restaurants/{restaurantID} {
      // restaurantID could be restaurant_123
    }
    match /restaurants/{restaurantID}/reviews/{reviewID} {
    }
  }
}
wildcard
service cloud.firestore {
  match /databases/{database}/documents {
    match /restaurants/{restOfPath=**} {
      // restOfPath could be restaurant_123
      // or could be restaurant_123/reviews/review_456
    }
    match /restaurants/{restaurantID}/reviews/{reviewID} {
    }
  }
}
service cloud.firestore {
  match /databases/{database}/documents {
    match /restaurants/{restaurantID} {
      // restaurantID = The ID of the restaurant doc is available at this level
      match /reviews/{reviewID} {
        // restaurantID = The ID of the restaurant doc is available at this level
        // reviewID = The ID of the review doc is also available at this level
        allow write: if reviewID == "review_234"
      }
      // private data that only helps queries, not to be sent back to client
      match /private-data/{privateDoc} {
      }
    }
  }
}
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // DO NOT LEAVE THIS IN, this will override everything else, RULES are OR-ED
      allow read, write;
    }
    match /users/{restOfPath=**} {
      allow read;
    }
    match /users/{userID}/privateData/{privateDoc} {
      // Doesn't do anything
      allow read: if false;
    }
  }
}
Read Control Access with Custom Claims and Security Rules to learn how to add extra fields to the Auth token; a sketch of setting such a claim is shown below.
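A minimal sketch of setting a claim, assuming a Node environment with the firebase-admin SDK (the admin flag and the uid are made up for illustration):

const admin = require("firebase-admin");
admin.initializeApp();

// After this, rules can check request.auth.token.admin == true.
// The user must refresh their ID token before the new claim becomes visible.
async function grantAdmin(uid) {
    await admin.auth().setCustomUserClaims(uid, { admin: true });
}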
access rule checks
// logged in users only
service cloud.firestore {
  match /databases/{database}/documents {
    match /myCollection/{docID} {
      allow read: if request.auth != null;
    }
  }
}

// logged in users with google email
service cloud.firestore {
  match /databases/{database}/documents {
    match /myCollection/{docID} {
      allow read: if request.auth.token.email.matches('.*google[.]com$');
    }
  }
}

// logged in users with google email and email verified
service cloud.firestore {
  match /databases/{database}/documents {
    match /myCollection/{docID} {
      allow read: if request.auth.token.email.matches('.*google[.]com$') &&
                     request.auth.token.email_verified;
    }
  }
}

// only logged in as albert
service cloud.firestore {
  match /databases/{database}/documents {
    match /myCollection/{docID} {
      allow read: if request.auth.uid == "albert_245";
    }
  }
}

// role based access using private collection
allow update: if get(/databases/$(database)/documents/restaurants/$(restaurantID)/private_data/private).data.roles[request.auth.uid] == "editor";

// multiple role access using private collection
allow update: if get(/databases/$(database)/documents/restaurants/$(restaurantID)/private_data/private).data.roles[request.auth.uid] in ["editor", "owner"];
using functions
// logged in users with google email and email verified
service cloud.firestore {
  match /databases/{database}/documents {
    match /myCollection/{docID} {
      function doesUserHaveGoogleAccount() {
        return request.auth.token.email.matches('.*google[.]com$')
            && request.auth.token.email_verified;
      }
      allow read: if doesUserHaveGoogleAccount();
    }
  }
}
or
service cloud.firestore {
  match /databases/{database}/documents {
    function userIsRestaurantEditor(restaurantID) {
      return get(/databases/$(database)/documents/restaurants/$(restaurantID)/private_data/private)
               .data.roles[request.auth.uid] in ["editor", "owner"];
    }
    match /restaurants/{restaurantID} {
      // restaurantID = The ID of the restaurant doc is available at this level
      allow update: if userIsRestaurantEditor(restaurantID);
      match /reviews/{reviewID} {
        // restaurantID = The ID of the restaurant doc is available at this level
        // reviewID = The ID of the review doc is also available at this level
      }
      // private data that only helps queries, not to be sent back to client
      match /private-data/{privateDoc} {
      }
    }
  }
}
To test rules, use the cloud firestore emulator; a sketch of a rules unit test is shown below.
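A hedged sketch of such a test, assuming the @firebase/rules-unit-testing package (v2 API) and a running firestore emulator; the project id, rules file name and document ids are made up:

const { readFileSync } = require("fs");
const {
    initializeTestEnvironment,
    assertSucceeds,
    assertFails,
} = require("@firebase/rules-unit-testing");

async function run() {
    const env = await initializeTestEnvironment({
        projectId: "demo-myproject",
        firestore: { rules: readFileSync("firestore.rules", "utf8") },
    });

    // e.g. against the "logged in users only" rule above
    const alice = env.authenticatedContext("albert_245").firestore();  // signed-in user
    const anon = env.unauthenticatedContext().firestore();             // not signed in

    await assertSucceeds(alice.collection("myCollection").doc("doc1").get());
    await assertFails(anon.collection("myCollection").doc("doc1").get());

    await env.cleanup();
}
run();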
pagination needs to be used to save data transfer cost
// first page
myQuery = restaurantRef
    .whereField("city", isEqualTo: "Tokyo")
    .whereField("category", isEqualTo: "tempura")
    .order(by: "rating", descending: true)
    .limit(to: 20)

// next page
myQuery = restaurantRef
    .whereField("city", isEqualTo: "Tokyo")
    .whereField("category", isEqualTo: "tempura")
    .order(by: "rating", descending: true)
    .limit(to: 20)
    .start(after: ["Tokyo", "tempura", 4.9])

// or a simpler way,
myQuery = myQuery.start(after: ["Tokyo", "tempura", 4.9])

// even easier
myQuery = myQuery.start(after: previousDoc)
Don't use .offset() because you will still be billed for the skipped data.
Cross Platform Apps
The traditional way is to use a multi-platform UI library; GTK4, from the gnome project, is one example. However, it requires a lot of coding to provide the plumbing between different components.
Another, often more convenient, way to build cross platform apps is to use web technology for the frontend user interface. This is a well understood technology, so the gui can be built with standard web page design tooling.
The most popular such app design environment is electron, which is chromium based; the backend of an electron app is node. The problem with electron is the size of the distributable bundle. Bundles are quite large because they include node and chromium. To reduce bundle size, newer similar environments use
- Webview instead of chromium. Desktop distros already ship with a webview.
- A faster and lighter compiled binary backend.
As a result, the bundles are orders of magnitude smaller and more secure compared to Electron.
One such environment is Tauri, which is rust based. Here I will record my notes as I try to build a Tauri app with a svelte frontend.
Electron Python
Tauri
Installing
- Install rust. Run the installer from the rust site.
- Install tauri. Follow instructions from the [tauri](https://tauri.app/v1/guides/getting-started/setup/sveltekit/) site.
cargo install create-tauri-app
# cargo create-tauri-app
Install tauri cli
cargo install tauri-cli
Create the project
npm create svelte@latest tauri-svelte
cd tauri-svelte
git init
# add tauri to project
cargo tauri init
npm install @tauri-apps/api
Run in dev mode
cargo tauri dev
Build the production version
cargo tauri build
Use web rendering engine
WebRender is a GPU-based 2D rendering engine for web content written in Rust. See https://github.com/servo/webrender
We can either use a javascript engine similar to a web browser, or build the UI functionality in pure rust.
GTK4
Svelte
General capabilities
create a starter svelte project
npx degit sveltejs/template my-svelte-project
cd my-svelte-project
npm install
npm run dev
vscode setting
Command Palette (⇧⌘P) then: Preferences: Configure Language Specific Settings,
{
"[svelte]": {
"editor.defaultFormatter": "svelte.svelte-vscode",
"editor.tabSize": 2
}
}
Svelte Browser compatibility
Svelte components
- https://github.com/hperrin/svelte-material-ui
- https://github.com/collardeau/svelte-fluid-header
- https://flaviocopes.com/svelte-state-management/
Svelte Component exported functions
Comp1.svelte file:
<script>
import Comp2, {mod_param as C2param} from './Comp2.svelte';
console.log(C2param.someText);
C2param.someFunc("Comp2 function called from Comp1");
</script>
Comp2.svelte file:
<script>
// This is component instance context
...
</script>
<script context="module">
// This is component module context
export const mod_param = {
someFunc: (text) => {
console.log(text);
},
someText: "hello from Comp2"
}
</script>
Exchanging data between a component's module and instance context is tricky, so appropriate handling is required in such cases. It is best to use the REPL sandbox's JS output window to check the exact use case.
Component lifecycle
Components,
- are instantiated and composed inside parents.
- are initialized by parents via parameters. Parameters are passed by value.
- display information and interact with user.
- manage user interaction via state transitions.
- make the user-entered information accessible to parents via bindings. Binding keeps the parent's variable in sync with the component's variable, making the parameter behave as if it were passed by pointer.
- send events to parent so that parent can take actions. Parent uses event handlers.
- are destroyed by parents when they are no longer needed.
A parent can get component state both via bindings and events. Bindings provide easy reactive updates in the parent. Events provide an easy way to trigger algorithmic action in the parent. Using both as appropriate is the best approach.
Check Managing state in Svelte.
Passing parameter to sub-component
Sub-components act like javascript functions with parameters: pass parameters individually, or pass all the parameters as a dictionary using the spread operator. You can set default values inside the sub-component.
<Comp1 param1={data} />
<Comp2 {...selprops} />
Using the spread operator is preferred for a large number of parameters.
Check Passing parameters to components.
Binding
Binding is exactly the same as passing individual parameters, except that you attach the bind: keyword to the association.
<Comp1 bind:param1={data} />
Binding will keep the component parameter param1 and the parent variable data in sync. When the component updates param1, it is immediately reflected in data. The parent can bind data with multiple components and they will all stay in sync as well.
There is no spread style binding syntax supported.
There is a short hand syntax available for binding in the case when parameter name and variable name are the same.
<Comp1 bind:same_name={same_name} />
<!-- short hand syntax for above -->
<Comp1 bind:same_name />
Check component bindings.
Events
Parent.svelte source:
<script>
import Comp1 from './Comp1.svelte';
function handleMessage(event) {
alert(event.detail.text);
}
</script>
<Comp1 on:event_name={handleMessage}/>
Comp1.svelte source:
<script>
import { createEventDispatcher } from 'svelte';
const dispatch = createEventDispatcher();
function sayHello() {
dispatch('event_name', {
text: 'Hello!'
});
}
</script>
<button on:click={sayHello}>
Click to say hello
</button>
To bubble up a component message, the parent can forward it further up; see event forwarding.
develop reusable components
using REPL, Github and npm
todo: write up
Generating web components with svelte
Styling
- What I Like About Writing Styles with Svelte
- The zen of Just Writing CSS
- https://css-tricks.com/what-i-like-about-writing-styles-with-svelte/
- With tailwind we can import external css file into a component, Although it is not specific for svelte, still is a good read: Add Imports to Tailwind CSS with PostCSS.
- css in js for svelte, https://svelte.dev/blog/svelte-css-in-js
Generated content and global styling in production build
Svelte supports global styling of generated content. However, the production build (with postcss and purgecss) removes any css that is not used by the static content, even a global style, if it is loaded from a css file using the postcss-include facility. This doesn't happen during a development build since purgecss isn't run there. To ensure those styles are retained, we need to tell purgecss about them. For instance, a highlighter such as highlight.js prefixes its classes with hljs-. We can retain their styling by adding a whitelist pattern /hljs-/ in postcss.config.js, the same way it is done for svelte in the official svelte template.
const purgecss = require('@fullhuman/postcss-purgecss')({
content: ['./src/**/*.svelte', './src/**/*.html'],
whitelistPatterns: [/svelte-/, /hljs-/],
defaultExtractor: content => content.match(/[A-Za-z0-9-_:/]+/g) || []
})
Transitions, Animations
- https://svelte.dev/repl/865750b1ffb642f59d317747bd9f3534?version=3.4.4
- https://stackoverflow.com/questions/56453366/cant-use-svelte-animate-to-make-a-list-item-fly-into-a-header
- https://svelte.dev/repl/f4386ec88df34e3b9a6b513e19374824?version=3.4.4 for moving selected item to a position.
state management
- https://stackoverflow.com/questions/65092054/how-to-use-svelte-store-with-tree-like-nested-object
- https://www.newline.co/@kchan/state-management-with-svelte-props-part-1--73a26f45
- https://medium.com/@veeralpatel/things-ive-learned-about-state-management-for-react-apps-174b8bde87fb
- https://svelte-recipes.netlify.app/stores/
- https://mobx.js.org/getting-started
- https://blog.logrocket.com/application-state-management-with-svelte/
Contexts vs. stores
Contexts and stores seem similar. They differ in that stores are available to any part of an app, while a context is only available to a component and its descendants. This can be helpful if you want to use several instances of a component without the state of one interfering with the state of the others.
In fact, you might use the two together. Since context is not reactive and not mutable, values that change over time should be represented as stores:
const { these, are, stores } = getContext(...);
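A minimal sketch of combining the two: the parent puts a writable store into context so each instance of the component tree gets its own reactive state (the key name counter is made up for illustration):

<!-- Parent.svelte (hypothetical) -->
<script>
  import { setContext } from 'svelte';
  import { writable } from 'svelte/store';

  const counter = writable(0);      // reactive value scoped to this component tree
  setContext('counter', counter);
</script>

<slot />

<!-- Any descendant component -->
<script>
  import { getContext } from 'svelte';
  const counter = getContext('counter');   // same store instance the parent created
</script>

<button on:click={() => $counter += 1}>
  clicked {$counter} times
</button>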
Svelte markdown editor
- Build Markdown editor using Svelte in 10 minutes, this uses marked.js parser, github.
- Stackedit is a vue markdown editor, but provides scroll sync. We can take a look at the source code and do similar thing in Svelte. Uses https://github.com/markdown-it/markdown-it. markdown-it has most versatile collection of plugins.
- Nice scrollbar sync example, https://github.com/vincentcn/markdown-scroll-sync.
- Another scrollbar sync example, https://github.com/jonschlinkert/remarkable. Look in the demo directory.
- The most promising markdown editor seems to be markdown-it; vscode uses it for their markdown support as well. It is a project which evolved from remarkable. Check the support/demo_template directory of https://github.com/markdown-it/markdown-it for the scroll syncing javascript source. With some modification this can be integrated with svelte. I like the way vscode does it: whenever the cursor is on a line, it finds the corresponding element in the preview window and draws a side bar to indicate the element being edited. I was able to integrate the relevant part of the code from support/demo_template with svelte, read more.
using processing p5.js in svelte
- p5-Svelte: a quick and easy way to use p5 in Svelte!, Github
- Q&A #6: p5.js Sketch as Background
- https://p5js.org/reference/#/p5/loadImage
Routing
- Client-side (SPA) routing in Svelte
- Micro client-side router inspired by the Express router
- Svelte routing with page.js, Part 1
- Svelte routing with page.js, Part 2, code in https://github.com/iljoo/svelte-pagejs.
- Setting up Routing In Svelte with Page.js
- Svelte with firebase and page.js router- I have built a template, work in progress: https://github.com/kkibria/svelte-page-markdown.
SPA & Search Engine Optimization (SEO)
- SPA SEO: A Single-Page App Guide to Google’s 1st Page
- Why Single Page Application Views Should be Hydrated on the Client, Not the Server
Svelte with firebase
- svelte
- Rich Harris - Rethinking reactivity
- Svelte 3 Reaction & QuickStart Tutorial
- Svelte + Firebase = Sveltefire (and it is FIRE 🔥🔥🔥)
- Svelte Realtime Todo List with Firebase. I built a template using this in https://github.com/kkibria/svelte-todo.
- Uses firebase auth web UI Sapper/Svelte Firebase Auth UI, github source code.
- Firebase storage with svelte - RxFire in Svelte 3 using Firebase Firestore and Authentication
Login data passing with context API
Firebase has its own login subscription via rxfire. The following articles are a good read to understand how svelte supports sharing data across the app; a sketch of exposing the logged-in user through context is shown below.
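A hedged sketch of handing the logged-in user to descendants through context, assuming rxfire's authState() and a hypothetical module ./firebase.js that exports the initialized auth instance:

<!-- AuthProvider.svelte (hypothetical) -->
<script>
  import { setContext } from 'svelte';
  import { authState } from 'rxfire/auth';
  import { auth } from './firebase.js';   // hypothetical module exporting the Auth instance

  // authState() returns an observable of the current user; Svelte can
  // auto-subscribe to it like a store, so we simply share it via context.
  const user = authState(auth);
  setContext('user', user);
</script>

<slot />

<!-- Any descendant component -->
<script>
  import { getContext } from 'svelte';
  const user = getContext('user');
</script>

{#if $user}
  <p>Signed in as {$user.email}</p>
{:else}
  <p>Not signed in</p>
{/if}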
Web components
A Svelte component can be compiled into a web component as documented in Custom element API, read this carefully.
At the moment this is an incomplete feature. There are several issues, described in the following parts of this section. If you don't depend on them (kebab casing and :global css), you can make web components.
There should be two ways we can interact with web component technology,
- Instantiate web component in a svelte component. Check this as a starting point, https://youtu.be/-6Hy3MHfPhA.
- Instantiate a svelte component in a web component or an existing web page, like a web component.
Compile
To convert your existing Svelte app into a web component, you need to,
- Add <svelte:options tag="component-name"> somewhere in your .svelte file to assign a html tag name.
- Tell the svelte compiler to create web components by adding the customElement: true option. However, this would turn every svelte file it compiles into a web component. In a typical project you might want only the top svelte file compiled as a web component and all other files integrated into that top level file. This can be done using the customElement option with a filter.
The following filter example only makes a web component when the file has a .wc.svelte extension.
// rollup.config.js
svelte({ customElement: true, include: /\.wc\.svelte$/ }),
svelte({ customElement: false, exclude: /\.wc\.svelte$/ }),
Global CSS file
Using global CSS from a file should be done carefully if it is not scoped properly. It should be avoided inside a component; all styles should be defined inside the Svelte components so that the svelte compiler can scope them properly. Using a class name to scope CSS is the right approach if needed. For web components svelte will inline the CSS. Note that the `:global()` specifier is needed for generated content, since svelte will otherwise remove any unused CSS. But this does not work in a custom element right now.
A proposed workaround is to use the webpack `ContentReplacePlugin`,
plugins: [
new ContentReplacePlugin({
rules: {
'**/*.js': content => content.replace(/:global\(([\[\]\(\)\-\.\:\*\w]+)\)/g, '$1')
}
})
]
A side note, CSS doesn't leak thru shadow DOM which is used for web components.
Property name
Property names should be restricted to lower case; camel case and kebab case should not be used as they have complications. However, if you must, check https://github.com/sveltejs/svelte/issues/3852. The proposed solution is to create a wrapper,
import MyCustomComponent from './MyCustomComponent.svelte';
class MyCustomComponentWrapper extends MyCustomComponent {
static get observedAttributes() {
return (super.observedAttributes || []).map(attr => attr.replace(/([a-zA-Z])(?=[A-Z])/g, '$1-').toLowerCase());
}
attributeChangedCallback(attrName, oldValue, newValue) {
attrName = attrName.replace(/-([a-z])/g, (_, up) => up.toUpperCase());
super.attributeChangedCallback(attrName, oldValue, newValue);
}
}
customElements.define('my-custom-component', MyCustomComponentWrapper);
MyCustomComponent.svelte
<script>
export let someDashProperty;
</script>
<svelte:options tag={null} />
{someDashProperty}
Then you can use it in this way:
<my-custom-component some-dash-property="hello"></my-custom-component>
There are variants of this that can be used in the bundler to have automated wrapper injection done at build time.
If you use the esbuild bundler instead of rollup, the following would work.
Original Svelte component like this:
<!-- src/components/navbar/navbar.wc.svelte -->
<svelte:options tag="elect-navbar" />
<!-- Svelte Component ... -->
Create a `customElements.define` mock,
/* src/utils/custom-element.js */
export const customElements = {
define: (tagName, CustomElement) => {
class CustomElementWrapper extends CustomElement {
static get observedAttributes() {
return (super.observedAttributes || []).map((attr) =>
attr.replace(/([a-zA-Z])(?=[A-Z])/g, '$1-').toLowerCase(),
);
}
attributeChangedCallback(attrName, oldValue, newValue) {
super.attributeChangedCallback(
attrName.replace(/-([a-z])/g, (_, up) => up.toUpperCase()),
oldValue,
newValue === '' ? true : newValue, // [Tweaked] Value of omitted value attribute will be true
);
}
}
window.customElements.define(tagName, CustomElementWrapper); // <--- Call the actual customElements.define with our wrapper
},
};
Then use the esbuild `inject` option to inject the above code at the top of the built file:
/* esbuild.js */
import { build } from 'esbuild';
import esbuildSvelte from 'esbuild-svelte';
import sveltePreprocess from 'svelte-preprocess';
// ...
build({
entryPoints,
outdir,
bundle: true,
inject: ['src/utils/custom-element.js'], // <--- Inject our custom elements mock
plugins: [
esbuildSvelte({
preprocess: [sveltePreprocess()],
compileOptions: { customElement: true },
}),
],
})
// ...
This will produce a web component like this:
// components/navbar.js
(() => {
// src/utils/custom-element.js
var customElements = {
define: (tagName, CustomElement) => {
// Our mocked customElements.define logic ...
}
};
// Svelte compiled code ...
customElements.define("elect-navbar", Navbar_wc); // <--- This code compiled by Svelte will called our mocked function instead of actual customElements.define
var navbar_wc_default = Navbar_wc;
})();
- Can You Build Web Components With Svelte?, slightly old, some of the issues have been fixed since then. A must read to understand related issues.
Todo: We will explore this in more detail in the future.
Events
Read MDN events article.
The read-only `composed` property of the Event interface returns a Boolean which indicates whether or not the event will propagate across the shadow DOM boundary into the standard DOM.
// App.svelte
<svelte:options tag="my-component" />
<script>
import { createEventDispatcher } from 'svelte';
const dispatch = createEventDispatcher();
</script>
<button on:click="{() => dispatch('foo', {detail: 'bar', composed: true})}">
click me
</button>
Develop with Vite
Vite provides a very nice response time during the development cycle; compiles are really fast since vite doesn't bundle at development time. For production builds it uses rollup as usual with svelte and bundles everything together. It also has a nice plugin for kebab-case support for svelte component property names.
I am not sure whether a web component made with this plugin will support kebab casing, but using a web component with kebab casing from Svelte can be done with it.
Svelte Desktop and mobile app
svelte for desktop app
- Build a desktop app with Electron and Svelte, github
- Getting started with Electron and Svelte, read the discussion in this article for problems and solutions.
web apps & mobile apps
- https://dev.to/ruppysuppy/turn-your-website-into-a-cross-platform-desktop-app-with-less-than-15-lines-of-code-with-electron-44m3
- https://www.webtips.dev/how-to-make-your-very-first-desktop-app-with-electron-and-svelte
- https://dev.to/khangnd/build-a-desktop-app-with-electron-and-svelte-44dp
- https://fireship.io/snippets/svelte-electron-setup/
Svelte and Capacitor will allow web apps to become mobile apps
- https://ionicframework.com
- https://capacitorjs.com/
- https://stackoverflow.com/questions/58611710/how-to-setup-svelte-js-with-ionic
- https://www.joshmorony.com/using-the-capacitor-filesystem-api-to-store-photos/
- https://gist.github.com/dalezak/a6b1de39091f4ace220695d72717ac71
local file loading in electron
- Electron should be able to load local resources with enabled webSecurity
- https://www.electronjs.org/docs/api/protocol#protocolregisterfileprotocolscheme-handler-completion
electron app security
You may get errors using electron dialogs because `fs` and `ipcRenderer` cannot be used securely in the browser (renderer) thread.
- Error while importing electron in browser, `import { ipcRenderer } from 'electron'`.
- Read https://www.electronjs.org/docs/latest/tutorial/process-model to see how selected node environment apis can be made available to the renderer process via `contextBridge` (a minimal sketch follows this list).
- Also see https://stackoverflow.com/questions/44391448/electron-require-is-not-defined/59888788#59888788.
- Building a secure electron app, https://github.com/reZach/secure-electron-template/blob/master/docs/secureapps.md
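A minimal sketch of the `contextBridge` approach, assuming a hypothetical `dialog:open` channel and a preload script registered via `webPreferences.preload` when creating the BrowserWindow:

```js
// preload.js -- has Node access, exposes only a narrow API to the page
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('api', {
  // the page can call window.api.openFile() and gets a promise back
  openFile: () => ipcRenderer.invoke('dialog:open'),
});

// main.js -- handle the request in the main process where dialog/fs are allowed
const { ipcMain, dialog } = require('electron');

ipcMain.handle('dialog:open', async () => {
  const { filePaths } = await dialog.showOpenDialog({ properties: ['openFile'] });
  return filePaths;
});
```

The renderer never touches `fs` or `ipcRenderer` directly; it only sees the functions exposed on `window.api`.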
Node.js and frontend interaction in electron
there are two choices,
- server backend, chromium frontend communicating over tcpip port using standard web technique
- browser-node interaction via IPC. We can promisify this. Following is the suggested code from this gist.
main-ipc.ts
import { ipcMain, BrowserWindow, Event } from 'electron'
const getResponseChannels = (channel:string) => ({
sendChannel: `%app-send-channel-${channel}`,
dataChannel: `%app-response-data-channel-${channel}`,
errorChannel: `%app-response-error-channel-${channel}`
})
const getRendererResponseChannels = (windowId: number, channel: string) => ({
sendChannel: `%app-send-channel-${windowId}-${channel}`,
dataChannel: `%app-response-data-channel-${windowId}-${channel}`,
errorChannel: `%app-response-error-channel-${windowId}-${channel}`
})
export default class ipc {
static callRenderer(window: BrowserWindow, channel: string, data: object) {
return new Promise((resolve, reject) => {
const { sendChannel, dataChannel, errorChannel } = getRendererResponseChannels(window.id, channel)
const cleanup = () => {
ipcMain.removeAllListeners(dataChannel)
ipcMain.removeAllListeners(errorChannel)
}
ipcMain.on(dataChannel, (_: Event, result: object) => {
cleanup()
resolve(result)
})
ipcMain.on(errorChannel, (_: Event, error: object) => {
cleanup()
reject(error)
})
if (window.webContents) {
window.webContents.send(sendChannel, data)
}
})
}
static answerRenderer(channel: string, callback: Function) {
const { sendChannel, dataChannel, errorChannel } = getResponseChannels(channel)
ipcMain.on(sendChannel, async (event: Event, data: object) => {
const window = BrowserWindow.fromWebContents(event.sender)
const send = (channel: string, data: object) => {
if (!(window && window.isDestroyed())) {
event.sender.send(channel, data)
}
}
try {
send(dataChannel, await callback(data, window))
} catch (error) {
send(errorChannel, error)
}
})
}
static sendToRenderers(channel: string, data: object) {
for (const window of BrowserWindow.getAllWindows()) {
if (window.webContents) {
window.webContents.send(channel, data)
}
}
}
}
renderer-ipc.ts
import { ipcRenderer, remote, Event } from 'electron';
const getResponseChannels = (channel: string) => ({
sendChannel: `%app-send-channel-${channel}`,
dataChannel: `%app-response-data-channel-${channel}`,
errorChannel: `%app-response-error-channel-${channel}`
})
const getRendererResponseChannels = (windowId: number, channel: string) => ({
sendChannel: `%app-send-channel-${windowId}-${channel}`,
dataChannel: `%app-response-data-channel-${windowId}-${channel}`,
errorChannel: `%app-response-error-channel-${windowId}-${channel}`
})
export default class ipc {
static callMain(channel: string, data: object) {
return new Promise((resolve, reject) => {
const { sendChannel, dataChannel, errorChannel } = getResponseChannels(channel)
const cleanup = () => {
ipcRenderer.removeAllListeners(dataChannel)
ipcRenderer.removeAllListeners(errorChannel)
}
ipcRenderer.on(dataChannel, (_: Event, result: object) => {
cleanup()
resolve(result)
})
ipcRenderer.on(errorChannel, (_: Event, error: object) => {
cleanup()
reject(error)
})
ipcRenderer.send(sendChannel, data)
})
}
static answerMain(channel: string, callback: Function) {
const window = remote.getCurrentWindow()
const { sendChannel, dataChannel, errorChannel } = getRendererResponseChannels(window.id, channel)
ipcRenderer.on(sendChannel, async (_: Event, data: object) => {
try {
ipcRenderer.send(dataChannel, await callback(data))
} catch (err) {
ipcRenderer.send(errorChannel, err)
}
})
}
}
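For example, with the two helpers above, the renderer can call into the main process and await the result (the `load-settings` channel and the returned object are made up):

```ts
// main process
import ipc from './main-ipc'

ipc.answerRenderer('load-settings', async (data, window) => {
  // do the Node.js work here; the returned value resolves the renderer's promise
  return { theme: 'dark' }
})

// renderer process
import ipc from './renderer-ipc'

async function loadSettings() {
  const settings = await ipc.callMain('load-settings', {})
  console.log(settings)
}
```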
Sveltekit
TODO: check server side rendering.
Sapper, a server side framework with Svelte
Sapper is officially dead. It has been replaced by sveltekit.
TODO: The firebase related material here needs to be moved elsewhere. We will eventually remove this section.
Sapper
- sapper
- Building a project with Sapper, a JavaScript app framework
- Simple Svelte 3 app with router
- Exploring Sapper + Svelte: A quick tutorial
- Build a fast reactive blog with Svelte and Sapper
Sapper with firebase
- How to host a Sapper.js SSR app on Firebase
- Sapper - Deploy to Firebase Cloud Functions
- migrating to Sapper part 1 - SEO, Twitter Cards, OpenGraph
- migrating to Sapper part 2 - TDD with Cypress.io
- migrating to Sapper part 3 - RSS feed
- migrating to Sapper part 2 bis - Netlify, GitHub Actions with Cypress.io
- https://github.com/fusionstrings/firebase-functions-sapper
A recipe to be tried for firebase
from https://dev.to/nedwards/comment/h1l7 ...snip -->
Thanks very much for breaking this down.
I also tried following this video, https://youtu.be/fxfFMn4VMpQ, and found the workflow a bit more manageable: creating a Firebase project first, and then adding in a new Sapper project. Sources from the video: https://github.com/sveltecasts/006-deploy-sapper-to-firebase-cloud-functions.
I got through it without any issues, on Windows. Would have much preferred it to be a write-up like yours though, so here's a summary:
Create a new empty folder, then navigate to it in the VS Code terminal.
firebase init
- functions
- hosting
- use an existing project you've created on Firebase
- no ESLint
- don't install dependencies now
- public directory: functions/static (Sapper project will go into functions folder)
- SPA: no
Move or rename `package.json`, and delete `.gitignore` (sapper will create these for us later).
cd functions
npx degit sveltejs/sapper-template#rollup --force
Copy the contents of the `scripts` block (Firebase commands) from the old `package.json` into the `scripts` block of the new `package.json` that was generated by Sapper.
Rename Firebase's `start` command to `fb_start`.
Copy the entire `engines` block from the old to the new `package.json`, and change the node version to 10.
Copy over the contents of the `dependencies` and `devDependencies` blocks.
Delete the old `package.json` once all the Firebase stuff is moved over, and save the new Sapper one.
Remove polka from the dependencies in `package.json`.
npm install --save express
npm install
`server.js`:
- import express instead of polka
- change the function to: `const expressServer = express()` ...
- change `.listen` to `if (dev) { expressServer.listen ... }`
- `export { expressServer }`
`index.js`:
const {expressServer} = require('./__sapper__/build/server/server')
exports.ssr = functions.https.onRequest(expressServer);
npm run build
npm run dev
localhost:3000 will show the Firebase default `index.html` from the `static` folder, which can be deleted.
Page reload will bring up Sapper project.
`firebase.json`:
"rewrites": [ { "source": "**", "function": "ssr" }]
npm run build
firebase deploy
Visit app and click around, refresh to verify functionality.
Try Postman, send a GET to your project URL. In output, look for confirmation that content is SSR.
`package.json`:
"deploy": "npm run build && firebase deploy"
`Nav.svelte`:
- add a new `li` to the navbar, for a new page

`routes`:
- create a new `.svelte` page, and add some quick HTML content
npm run deploy
Verify new content shows. Run audit from Chrome dev tools.
<-- snip..
Angular, bootstrap, firebase
ngx-bootstrap or ng-bootstrap? ng-bootstrap's animation capability is an issue.
Should we use material2 instead of bootstrap? Some people say the material2 grid has limited functionality and the bootstrap grid system is easier to use.
articles
- Building An Angular 5 Project with Bootstrap 4 and Firebase
- Material Design vs Bootstrap: Which One is Better?
- Getting started with angular-material2
- angular-material-2 tutorial
web based code editor/text editor
- codemirror, source in github
Flutter Apps
Setup
- Install git
- Install npm
- Install flutter sdk
- Read carefully the firebase CLI docs. Install the firebase CLI using npm.
npm install -g firebase-tools
- Log into firebase using the firebase CLI.
firebase login
# this will test the successful login
# by listing all your projects
firebase projects:list
- Read carefully the flutter web doc. Change the channel to dev or beta. Currently I am using the dev channel to get the latest features.
flutter channel dev
flutter upgrade
flutter config --enable-web
- Set the powershell script policy by running the following in an admin powershell on a Windows machine. Otherwise firebase commands will not run.
Set-ExecutionPolicy RemoteSigned
Create a flutter project
Type the following flutter cli commands in a shell to create a starter flutter project.
flutter create <app_name>
cd <app_name>
This creates a folder named `<app_name>` in the current working directory. Next we change the working directory to the newly created `<app_name>` folder.
Android app, iOS app, and web app target support will be added to the project by the cli command.
Add git and setup for gitlab
git init
git add .
git commit -m "initial commit"
git push --set-upstream https://gitlab.com/kkibria/<app_name>.git master
git remote add origin https://gitlab.com/kkibria/<app_name>.git
Add firebase to the flutter project
Create a firebase project
- Go to the firebase console.
- Create a new firebase project in the firebase console with the `<app_name>` as the project name.
- In the project `Setting > General` tab select the Google Cloud Platform (GCP) resource location.
- Select the `Database` tab. Configure the firestore database into Native mode.
Add the firebase SDK support libraries
Add the firebase dart libraries to the `dependencies` section of the `pubspec.yaml` file.
...
dependencies:
...
# flutter firebase SDK libraries
# comment out the ones you don't need
firebase_auth: ^0.15.4
firebase_messaging: ^6.0.9
firebase_database: ^3.1.1
cloud_firestore: ^0.13.2+2
firebase_storage: ^3.1.1
...
Configure Webapp with PWA
PWA support was already added for the web platform by the `flutter create` command. We need to connect the flutter web target with a firebase web app.
- Add a web app to the firebase project.
- Add a nickname for the `<app_name>_web`.
- Click on the firebase hosting option.
- Now click on the Register button.
- It will show a javascript snippet demonstrating how to add the firebase javascript SDK to `web/index.html`. For now we won't add the snippet. We will do it later.
Connect the flutter web target with firebase webapp.
Run following firebase CLI command from inside <app_name> directory.
firebase init
Select either `Realtime Database` or `Firestore`, or both as necessary. Both can be used if there is a need, but that is probably not common. Check the rest of the options as necessary as well. Hit enter.
Select `Existing project` and hit enter. Then select the firebase project you just created.
Note: selecting firestore is giving index trouble, so I selected Realtime.
Select all defaults except for the public directory; type `build/web`.
Android app from flutter
Todo......
IOS app from flutter
Todo.....
Web app from flutter
We have to configure the web template file. When we build the web app, the web template file gets copied over to the `build/web` folder.
Update the flutter web template
`firebase init` will build an `index.html` file in the `build/web` directory. You will see that the firebase javascript SDK snippet we saw earlier is already included in this `index.html`.
However, every time flutter builds our web app this file will be overwritten from a template file.
Therefore, copy the firebase relevant portion of this file to the web template `web/index.html` file to update the template. The next time we build the web target with the `flutter build web` command, the javascript SDK snippet will persist.
The template will end up looking something like the following,
<head>
...
<title>my awesome pwa app</title>
<link rel="manifest" href="/manifest.json">
...
<!-- update the version number as needed -->
<script defer src="/__/firebase/7.8.2/firebase-app.js"></script>
<!-- include only the Firebase features as you need -->
<!-- comment out the ones you don't need -->
<script defer src="/__/firebase/7.8.2/firebase-auth.js"></script>
<script defer src="/__/firebase/7.8.2/firebase-database.js"></script>
<script defer src="/__/firebase/7.8.2/firebase-firestore.js"></script>
<script defer src="/__/firebase/7.8.2/firebase-messaging.js"></script>
<script defer src="/__/firebase/7.8.2/firebase-storage.js"></script>
<!-- initialize the SDK after all desired features are loaded -->
<script defer src="/__/firebase/init.js"></script>
...
</head>
Building the web app and hosting it on the firebase server.
flutter build web
firebase serve
firebase deploy
... to be continued
Flutter Sign-in for your users
Google sign in
- Firebase Google sign in
Email link sign in
- Article Flutter: How to implement Password-less login with Firebase.
- Article Flutter : Firebase Dynamic Link.
New notes
- https://proandroiddev.com/flutter-passwordless-authentication-a-guide-for-phone-email-login-6759252f4e
- https://medium.com/@ayushsahu_52982/passwordless-login-with-firebase-flutter-f0819209677
- https://medium.com/@levimatheri/flutter-email-verification-and-password-reset-db2eed893d1d
- For WebApps, email link login needs to be handled somewhat differently than for a regular ios or android app, https://firebase.google.com/docs/auth/web/email-link-auth
It is also important to note that email verification and password reset links through email will require a similar approach in flutter WebApps. We have to figure out how to handle those in the same dart code for all platforms.
Flutter matters
Flutter Text and rendering features
- flutter-text-rendering
- framework.dart
- text.dart
- basic.dart
- Examples of Flutter's layered architecture
Medium articles
YouTube videos
Firebase Auth sample
Firebase Auth articles
- Flutter Password-less Authentication — a guide for phone & email login
- Flutter: How to implement Password-less login with Firebase
- Flutter: Email verification and password reset
Articles on rendering
- The Engine architecture
- Flutter’s Rendering Engine: A Tutorial
- Everything you need to know about tree data structures
- Android’s Font Renderer
Flutter RenderObject / painting
Flutter UI design tools
- flutter IDE
- flutter studio, no source code! he has two apps, https://flutterstudio.app and https://devicedb.app/.
- another flutter studio project, has source code but broken code at the moment.
Dart serialization
Page Routing
- Flutter web routing with parameters
dart code generation
- [Part 1] Code generation in Dart: the basics
- [Part 2] Code generation in Dart: Annotations, source_gen and build_runner
- todo_reporter.dart in github
Flutter markdown editing
We can use this editor as a basis for markdown editing
- Soft and gentle rich text editing
- Markdown Editor With Flutter. This is incomplete, but at least the idea is there for us to examine.
Built_Value library
- Dart’s built_value for Immutable Object Models
- Dart’s built_collection for Immutable Collections
- Introduction To Built_Value Library In Dart
Textual Contents
Textual content authoring in html is quite tedious and alternative authoring is preferred for web sites. As such, markdown format has gathered popularity and is being used quite widely. This format is really just a simple text file that can be produced by any text editor.
We will look into Content Management with markdown for web sites. These sites can contain a collection of markdown files. Content Management allows them to be organized and navigated in a useful way. They are transformed to html before they are rendered in a browser.
However, wherever you host the content, the content flow is important to understand clearly before you start. For instance, we will talk about github pages later.
VS code to edit markdown files
Install the Rewrap vscode plugin. <Alt+Q> will hard wrap and unwrap your content. It makes life a lot easier when you are copy-pasting excerpts from somewhere else that come as long lines of text. It also helps writing comments in code by wrapping long comments. Read its documentation.
html to markdown conversion
We may need to convert existing html to markdown. `html2md` is a tool written in golang. Works nice!
Hosting Content in github pages
Github pages is quite popular for hosting as it is free and git controlled.
Although there are several options available for github pages, there is an idea of separation of concerns behind those options. They implemented the idea by using two different git branches: one for textual content like markdown source files and the other for generated html. Usually, the markdown source files live in the default branch `master` as we may edit those files more frequently. When we are ready to deploy our content, the generated html will live in another branch, let's call it `gh-pages`. github allows configuring the repository such that its web server will check out html from the `gh-pages` branch and use it.
Scaffolding setup
Knowing this, we will need to scaffold our project in a way that is convenient to manage. As such, I will suggest the way that felt most convenient for me.
my_project (folder)
+-- .git (points to master branch)
+-- .gitignore (set to ignore gh-pages folder)
+-- (markdown contents)
+-- deploy.sh (used for deploying generated content)
+-- gh-pages (folder)
+-- .git (points to gh-pages branch)
+-- (generated html contents)
Using terminal, create a project folder and open it in vscode,
mkdir my_project
cd my_project
echo "gh-pages" > .gitignore
mkdir gh-pages
# This will create the master branch
git init
git add .
git commit -m "first commit"
code .
Create repository in github and publish the default branch
With vscode, in source control panel, create a github repository and push everything we have so far to github.
If you are not using vscode, then you can manually create a github repository and push the `master` branch.
Create and publish the html branch
Now we create the `gh-pages` branch. The following bash script will do this,
URL=`git remote get-url origin`
pushd gh-pages
echo "gh-pages site" > index.html
git init
git remote add origin $URL
# create gh-pages branch
git checkout -b gh-pages
git add .
git commit -m "initial commit"
# push the branch to github
git push origin gh-pages
popd
Now the scaffolding is ready. We need to create a script that will deploy generated content.
Setup a deploy script
deploy.sh
command_to_generate_html
pushd gh-pages
git add .
git commit -m "deploy"
git push origin gh-pages
popd
If you are in windows, you can create equivalent powershell or command line scripts.
Setup github to pick up the branch and folder
In the github repo settings go to github pages, set the branch to `gh-pages`, change the folder to `docs`, and note the public url they provide for the site.
Create and publish the html branch (powershell version)
$url = git remote get-url origin
mkdir gh-pages
Set-Location gh-pages
echo "gh-pages site" > index.html
git init
git remote add origin $url
git checkout -b gh-pages
git add .
git commit -m "initial commit"
git push origin gh-pages
Set-Location ..
Setup a deploy script (powershell version)
deploy.ps1
command_to_generate_html
Set-Location gh-pages
git add .
git commit -m "deploy"
git push origin gh-pages
Set-Location ..
Mdbook
To create a new book, first you need to install `rust`.
Install mdbook and the front matter preprocessor
With this preprocessor we can support a subset of Yaml in front matter.
cargo install mdbook
cargo install --git https://github.com/kkibria/mdbook-frntmtr.git
Hosting at github pages
For static sites, github pages is a great choice. It is free and easy to use. Generate the scaffolding using instructions from Hosting Content in github pages section.
Initialize Project folder
cd my_project
mdbook init
mkdir src
Setup book.toml
add following,
[book]
src = "src"
[build]
build-dir = "gh-pages/docs"
[preprocessor.frntmtr]
command = "mdbook-frntmtr"
Add content
Start the local server for development,
mdbook serve -o
Now modify or add content in the `src` folder. It will live update. Once you are happy with the content you can deploy.
Deploy to github pages (powershell version)
deploy.ps1
mdbook build
Set-Location gh-pages
git add .
git commit -m "deploy"
git push origin gh-pages
Set-Location ..
Now the site will be served by github pages.
Hugo
tutorial
How to build Hugo Theme
Adding tailwind
Install Hugo
First install google golang from their website, as appropriate for your computer. Then build `hugo` from source code using their github repo.
mkdir $HOME/src
cd $HOME/src
git clone https://github.com/gohugoio/hugo.git
cd hugo
go install --tags extended
If you are a Windows user, substitute the `$HOME` environment variable above with `%USERPROFILE%`.
Create a site for github pages
Go to github and create a repo. Get the git https clone URL.
Now create a hugo directory,
mkdir <repo_name>
cd <repo_name>
hugo new site .
git remote add origin <repo_clone_url>
git push -u origin master
This will create the scaffolding for hugo. Now we will get a theme. Go to the themes github page and get the clone url.
cd themes
git clone <theme_clone_url>
cd ..
This will create a directory with the same name as the theme. Now copy the theme config to our config.
cp themes/<theme_name>/exampleSite/config.toml .
Now edit the config.toml file and delete the `themesdir` entry as appropriate.
At this point you can add content, and do your own theme.
Need to play with themes.
Render Math
Integrate slides capability
Integrate PDF generation
Jekyll
Jekyll filter
Filters are to be saved in the `_plugins` directory.
module Jekyll
module BibFilter
REGEXP = /\bjournal\b[\w\s= \{\-\.\,\(\)\-\:\+\'\/\..]+\},?/
def bibFilter(bib)
#filter text using regexp
str = "#{bib}".gsub(REGEXP,'')
#print filtered text
"#{str}"
end
end
end
Liquid::Template.register_filter(Jekyll::BibFilter)
Beyond Textual Contents
Javascript libraries that can generate content
Need to explore the following,
- For math expression https://github.com/KaTeX/KaTeX is a good library to provide client side rendering.
- For flowcharts https://flowchart.js.org also seems like a great library.
- For building graphs or networks, https://js.cytoscape.org
- 10+ JavaScript libraries to draw your own diagrams https://modeling-languages.com/javascript-drawing-libraries-diagrams.
- https://svgjs.com/docs/3.0 svg drawing library that provides animation.
- https://mermaid-js.github.io support flowcharts.
- https://ivanceras.github.io/content/Svgbob ascii diagram to svg.
Gitbook
Community portal
building a multi-community portal
There are a lot of heavy duty social networking sites around us. For a learning exercise, let's say we are building a social site that hosts multiple communities. This site will have many registered users. Any user can start a community and will be its owner. The community will have a portal and the owner can configure the look and feel of the portal. Other users can join the community and some of them will be given the right to author in the portal. How would we structure our data for such a site?
NOSQL database
Let's assume the data will be saved in a NOSQL database.
The structure will look like,
users:
userid:
To be continued...
Go language
vscode powershell setup
The powershell does not have the environment variables set up for go when it starts. First set up the powershell startup as shown in Powershell setup. In your go project directory create `.psrc.ps1` and put in the following to get the environment variables set up.
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
$env:GOPATH = [Environment]::GetEnvironmentVariable("GOPATH","User")
Alternatively you can set up your custom go binary path and `GOPATH` in this file as well.
Server
desktop
- https://github.com/zserge/lorca
- https://youtu.be/p_7MfQZTy34
- https://github.com/webview/webview
- https://github.com/wailsapp/wails
- https://youtu.be/Dg9rUXxNV-c
- https://github.com/asticode/go-astilectron
TODO: So far, go-astilectron seems the most promising for serious development. But astilectron is using a tcp connection for ipc. Instead, a path based connection will be much faster as long as both endpoints are within the same host, which is the most typical case. Under the hood a path based connection uses a unix domain socket or a windows named pipe depending on the o/s. As such the go side needs to adjust for path based connections.
using dbus
python to go
Java development
Maven
Maven is a great build tool, running maven build will compile, do all sorts of tasks and create the project target.
Maven build will store target jar and download all the dependencies (jars, plugin jars, other artifacts) for later use in the local repository. Maven supports 3 types of repository for dependencies:
- Local – Repository on local Dev machine.
- Central – Repository provided by Maven community.
- Remote – Organization owned custom repository.
The Local Repository
Usually this is a folder named `.m2` in the user's home directory.
The default path to this folder:
- Windows: `C:\Users\<User_Name>\.m2`, i.e. `%UserProfile%\.m2`.
- Linux: `/home/<User_Name>/.m2`, i.e. `~/.m2`.
- Mac: `/Users/<user_name>/.m2`, i.e. `~/.m2`.
The maven config file can change the default. The config file is located at the following path: `<Maven-install-Path>/conf/settings.xml`.
Reading materials:
Javascript library
Using require like a node package in browser
Let's take an example where we will use a node package in a commonjs javascript file. This kind of setup would work in a node environment without problem. But in a browser, using `require` would normally be a problem. We can use the following method to make it work in a browser.
In this example we will instantiate jquery with `require` as a node module. First get the jquery module using npm. Then make the `test.js` module as follows. At the end of the file we will export the API.
var $ = require("jquery");
function getName () {
return 'Jim';
};
function getLocation () {
return 'Munich';
};
function injectEl() {
// wait till document is ready
$(function() {
$("<h1>This text was injected using jquery</h1>").appendTo(".inject-dev");
});
}
const dob = '12.01.1982';
module.exports.getName = getName;
module.exports.getLocation = getLocation;
module.exports.dob = dob;
module.exports.injectEl = injectEl;
Now wrap everything up in a single module. You have two options,
- Use `browserify`.
- Use `rollup`.
Use browserify
Assuming you have already installed `browserify` and `js-beautify`, run them. If node builtins are used in your commonjs file, the browserify `--s` option will include them.
browserify test.js --s test -o gen-test.js
#optional only if you like to examine the generated file.
js-beautify gen-test.js -o pretty-gen-test.js
Check the outputs, you can see jquery has already been included in the output.
Now we can load `gen-test.js` in the browser in an html file. It also works with svelte. The following shows using it in a svelte source.
<script>
import { onMount } from 'svelte';
import test from "./gen-test.js"
let name = test.getName();
let location = test.getLocation();
let dob = test.dob;
test.injectEl();
</script>
<main>
<h1>Hello {name}!</h1>
<p>Location: {location}, dob: {dob}</p>
<div class="inject-dev"></div>
</main>
I have built this as an npm project in github with svelte template.
Use rollup
If you have installed `rollup`, this can also be done with the added benefit of tree shaking. `rollup.config.js` can be configured as,
import resolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';
import json from '@rollup/plugin-json';
export default {
...
plugins: [
resolve({
browser: true,
// only if you need rollup to look inside
// node_modules for builtins
preferBuiltins: false,
dedupe: ['svelte']
}),
json(),
commonjs()
]
}
node builtins
Read node-resolve documentation carefully.
If node builtins are used in your commonjs file, they will be missing. You have two options,
- Import the individual builtin packages with npm if you have only a few missing. Set `preferBuiltins` to `false` so that rollup can get them from `node_modules`.
- All node builtins can be included using the npm packages `rollup-plugin-node-builtins` and `rollup-plugin-node-globals` with rollup (a config sketch follows this list). Set `preferBuiltins` to `true` so that rollup will use the builtins from these instead. You can remove `preferBuiltins` altogether since its default value is `true` anyway.
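A minimal sketch of the second option, assuming both plugins are installed from npm (their default exports are imported under assumed names):

```js
// rollup.config.js
import resolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';
import globals from 'rollup-plugin-node-globals';
import builtins from 'rollup-plugin-node-builtins';

export default {
  // ...
  plugins: [
    resolve({ browser: true, preferBuiltins: true }),
    commonjs(),
    globals(),   // shims process, Buffer, global, ...
    builtins()   // browser versions of the node builtins
  ]
};
```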
Create javascript for older browsers
The way to fix language version problems is transpiling the javascript code using babel. To make sure the transpiled code works on older browsers, we have to test it on different browsers to see if and why it fails. Cross browser testing sites like https://www.lambdatest.com or https://saucelabs.com/ are helpful but can be expensive depending on the situation. Check the following to get an insight,
- babel Handbook.
- Browser support for javascript, ECMAScript compatibility table.
- Youtube video, Do you really need BABEL to compile JavaScript?
- Babel under the hood.
babel with rollup
Read rollup babel plugin documentation carefully to understand how to configure the plugin. This plugin will invoke babel. Next we need to understand how to configure babel, read Babel configuration documentation.
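As an illustration, a rollup config using the babel plugin might look like the sketch below; the preset and the targets string are assumptions, adjust them to your browser support needs:

```js
// rollup.config.js
import { babel } from '@rollup/plugin-babel';

export default {
  // ...
  plugins: [
    babel({
      babelHelpers: 'bundled',
      exclude: 'node_modules/**',
      presets: [['@babel/preset-env', { targets: '> 0.25%, not dead' }]]
    })
  ]
};
```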
babel with webpack
- Dillinger is a good example, Github source. I tried their site on IE11 and it worked fine.
- Good info on using babel in the Github source readme of the webpack plugin `babel-loader`.
- Support IE 11 Using Babel and Webpack
Polyfill articles
- Polyfills: everything you ever wanted to know, or maybe a bit less.
- Loading Polyfills Only When Needed.
File io from browsers
Tree shaking with rollup
- Optimizing JavaScript packages for tree shaking.
- How to bundle a npm package with TypeScript and Rollup.
TypeScript
We can automatically compile typescript files by running typescript compiler in watch mode,
tsc *.ts --watch
Check out more details on `tsconfig.json` usage.
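For reference, a minimal `tsconfig.json` could look like the sketch below (the option values are just an example); with this file present, running `tsc --watch` without arguments will pick it up:

```json
{
  "compilerOptions": {
    "target": "es2018",
    "module": "commonjs",
    "outDir": "dist",
    "strict": true,
    "sourceMap": true
  },
  "include": ["src/**/*.ts"]
}
```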
Best way to provide module API
I like using Javascript classes to provide prototype-based APIs that use state. APIs without state can be exported as plain functions.
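A small illustration of the idea (the names `Counter` and `add` are made up):

```js
// Stateful API: the state lives inside a class instance.
export class Counter {
  constructor() { this.count = 0; }
  increment() { return ++this.count; }
}

// Stateless API: a plain exported function.
export function add(a, b) {
  return a + b;
}
```

A consumer would do `const c = new Counter(); c.increment();` for the stateful API, and just `add(1, 2)` for the stateless one.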
js asynchronous programming: Promise(), async/await explained
Traditionally, we would access the results of asynchronous code through the use of callbacks.
let myfunc = (maybeAnID) => {
  someDatabaseThing(maybeAnID, function(error, result) {
    if (error) {
      // build an error object and return
      // or just process the error and log and return nothing
      return doSomethingWithTheError(error);
    } else {
      // process the result, build a return object and return
      // or just process result and return nothing
      return doSomethingWithResult(result);
    }
  });
};
The use of callbacks is ok until they become overly nested. In other words, you have to run more asynchronous code with each new result. This pattern of callbacks within callbacks can lead to something known as callback hell.
To solve this we can use promise.
A promise is simply an object that we create like in the later example. We instantiate it with the `new` keyword, and we pass in a function that takes two arguments: `resolve` and `reject`.
let myfunc = (maybeAnID) => new Promise((resolve, reject) => {
  someDatabaseThing(maybeAnID, function(error, result) {
    //...Once we get back the thing from the database...
    if (error) {
      reject(doSomethingWithTheError(error));
    } else {
      resolve(doSomethingWithResult(result));
    }
  });
});
The call to the asynchronous function is simply wrapped in a promise that is returned, which allows chaining with `.then()`. This is much more readable than the callback-hell style of coding.
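For example, a consumer of `myfunc` above could chain like this (`someId` and `doSomethingElseAsync` are made-up placeholders, the latter being a follow-up step that also returns a promise):

```js
myfunc(someId)
  .then(result => doSomethingElseAsync(result)) // each .then() receives the previous result
  .then(finalResult => console.log(finalResult))
  .catch(err => console.error(err));            // one place to catch errors from the whole chain
```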
Or we can have our own function that returns a promise without wrapping a callback API.
let myfunc = (param) => new Promise((resolve, reject) => {
  // Do stuff that takes time using param
  ...
  if (error) {
    reject(doSomethingWithTheError(error));
  } else {
    resolve(doSomethingWithResult(result));
  }
});
The functions that consume a function returning a promise can use `.then()` chaining. However, there is yet another, cleaner alternative. In a function we can use `await` to invoke a function that returns a promise. This will make the execution wait till the promise is resolved or rejected. We don't need to use `.then()` chaining, which improves readability even further. A function that wants to use `await` must be declared with the `async` keyword.
More details here,
- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function
- https://levelup.gitconnected.com/async-await-vs-promises-4fe98d11038f
- https://www.freecodecamp.org/news/how-to-write-a-javascript-promise-4ed8d44292b8/
- https://medium.com/@sebelga/simplify-your-code-adding-hooks-to-your-promises-9e1483662dfa
An `async` function always returns a promise. From a regular function it can be called with `.then()`.
let v = new Promise((resolve, reject) => {
// throw Error("Returning error");
resolve(20);
});
async function abc() {
// test rejected promise
// r will be undefined in case there is an error
let r = await v.catch(() => {});
console.log(r);
return 35;
}
async function pqr() {
console.log("running abc");
let p = await abc();
console.log(p);
}
// unused return value which is a promise
// This is perfectly acceptable if
// there was no result to return
pqr();
// this is how we retrieve return value from
// an async function from a regular function
// or top level as we can't use await
// however module top level can be made async
abc().then(val => console.log(val));
clipboard API
Modern web browsers provide an api for clipboard access.
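A minimal sketch using the asynchronous Clipboard API (it requires a secure context, and reading may prompt the user for permission):

```js
async function copyThenPaste() {
  // write some text to the system clipboard
  await navigator.clipboard.writeText('hello from the clipboard');

  // read it back
  const text = await navigator.clipboard.readText();
  console.log(text);
}
```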
tree implementation in node
const dirTree = require("directory-tree");
function printtree(indent, tree, last) {
let myindent = last ? "'": "|";
console.log(`${indent}${myindent}-- ${tree.name}`);
if ("children" in tree) {
for (let i=0; i < tree.children.length; i++) {
let chindent = last ? " ": "|";
let chlast = i == (tree.children.length-1);
printtree(indent+chindent+" ", tree.children[i], chlast);
}
}
}
function tree(path) {
let dt = dirTree(path);
printtree("", dt, false);
}
Rust language
Learning rust
- The rust book. Expand the TOC by pressing the menu icon on the top left of the page.
- The Rust Lang Book. I like this video series, watch along with the rust book.
- Rust: A Language for the Next 40 Years - Carol Nichols.
- Rust Out Your C by Carol. The Slides.
- Stanford Seminar The Rust Programming Language - The Best Documentary Ever
- Traits and You: A Deep Dive — Nell Shamrell-Harrington.
- Let's Learn Rust: Structs and Traits
- https://tourofrust.com
- An excellent article https://fasterthanli.me/articles/a-half-hour-to-learn-rust
rust libraries
desktop app with rust
- https://tauri.studio The most practical application pattern is what they call the lockdown pattern (event api) combined with the rust command api. Uses webview2 for windows.
- https://tauri.studio/en/docs/guides/command
- https://tauri.studio/en/docs/guides/events
- GUI https://github.com/vizia/vizia
creating books
Using rust in Raspberry pi
- How to Get Started With Rust on Raspberry Pi
- Program the real world using Rust on Raspberry Pi
- Cross compiling Rust for Raspberry Pi
- Cross Compiling Rust for the Raspberry Pi
- Anyone using Rust on a PI?
- Learn to write an embedded OS in Rust, github, tutorials.
- Prebuilt Windows Toolchain for Raspberry Pi. Question: who are these people? Where are the sources for these tools?
- Cross compiling Rust for ARM (e.g. Raspberry Pi) using any OS!
- “Zero setup” cross compilation and “cross testing” of Rust crates
- Vagrant, Virtual machine for cross development. I really like this setup, easy to use. Plays well with virtualbox.
- https://github.com/kunerd/clerk/wiki/How-to-use-HD44780-LCD-from-Rust#setting-up-the-cross-toolchain
- https://opensource.com/article/19/3/physical-computing-rust-raspberry-pi
- https://github.com/japaric/rust-cross
rust GPIO for pi
- May be a kernel module with rust?? Some work is ongoing.
- RPPAL.
- https://github.com/rust-embedded/rust-sysfs-gpio.
The most promising option seems to be RPPAL.
I will try this option and do a write-up on it.
Cross compiling rust on ubuntu
Compiling rust on the pi will take forever; cross compiling will save development time. We will use ubuntu for cross compiling.
If we are on a windows machine, WSL2 is also a good way to develop for the raspberry. Check WSL 2: Getting started. Go ahead and install ubuntu to run with WSL2.
The primary problem with cross compiling rust for the pi zero is that the zero is armv6 but the other pis are armv7. At the time of this writing, the gcc toolchain only has support for armv7, and an armv6 compile also produces an armv7 image. So the toolchain needs to be installed from the official pi tool repo on github, which has armv6 support. See more in the following links,
- https://github.com/rust-embedded/cross/issues/426
- https://github.com/japaric/rust-cross/issues/42
- https://hub.docker.com/r/mdirkse/rust_armv6
Using this strategy we will go ahead and set up wsl2 linux as detailed in Rust in Raspberry Pi.
QEMU for library dependencies
- Debootstrap
- Introduction to qemu-debootstrap.
- https://headmelted.com/using-qemu-to-produce-debian-filesystems-for-multiple-architectures-280df41d28eb.
- Kernel Recipes 2015 - Speed up your kernel development cycle with QEMU - Stefan Hajnoczi.
- Debootstrap #1 Creating a Filesystem for Debian install Linux tutorial.
- Creating Ubuntu and Debian container base images, the old and simple way.
- Raspberry Pi Emulator for Windows 10 Full Setup Tutorial and Speed Optimization.
- RASPBERRY PI ON QEMU.
Linux kernel module with rust
rust-wasm
- https://rustwasm.github.io/
- book
- Rust in the Browser for JavaScripters: New Frontiers, New Possibilities
java to rust
python to rust
using dbus in rust
- https://github.com/diwic/dbus-rs dbus crate.
- https://github.com/diwic/dbus-rs/issues/214 Simple dbus-codegen example.
- https://github.com/deifactor/ninomiya
- https://github.com/diwic/dbus-rs/blob/master/dbus-codegen/examples/adv_server_codegen.rs server example.
- https://github.com/diwic/dbus-rs/blob/master/dbus/examples/match_signal.rs client example using dbus-codegen-rust.
- https://github.com/kkibria/rustdbuscross
pi dbus
$ dpkg -l | grep dbus
ii dbus 1.12.16-1 armhf simple interprocess messaging system (daemon and utilities)
ii libdbus-1-3:armhf 1.12.16-1 armhf simple interprocess messaging system (library)
ii libdbus-1-dev:armhf 1.12.16-1 armhf simple interprocess messaging system (development headers)
ii python-dbus 1.2.8-3 armhf simple interprocess messaging system (Python interface)
ii python3-dbus 1.2.8-3 armhf simple interprocess messaging system (Python 3 interface)
install dbus-codegen-rust
following will install dbus-codegen-rust CLI.
cargo install dbus-codegen
There are two possibilities
- Write server and client.
- Write a client for an existing installed server.
Client for an existing server
Example of generating the code,
dbus-codegen-rust -s -d org.freedesktop.timedate1 -p /org/freedesktop/timedate1 -o src/timedate.rs -i org.freedesktop
which will put the code in the `src` folder.
cross compile dbus
- https://github.com/diwic/dbus-rs/blob/master/libdbus-sys/cross_compile.md
- https://serverfault.com/questions/892465/starting-systemd-services-sharing-a-session-d-bus-on-headless-system headless dbus.
- https://raspberrypi.stackexchange.com/questions/114739/how-to-install-pi-libraries-to-cross-compile-for-pi-zero-in-wsl2.
The following script downloads and cross-compiles D-Bus and Expat for Raspberry Pi zero:
#!/usr/bin/env bash
set -ex
# Clone the D-bus and Expat libraries
[ -d dbus ] || \
git clone --branch dbus-1.13.18 --single-branch --depth=1 \
https://gitlab.freedesktop.org/dbus/dbus.git
[ -d libexpat ] || \
git clone --branch R_2_2_9 --single-branch --depth=1 \
https://github.com/libexpat/libexpat.git
# Script for building these libraries:
cat << 'EOF' > build-script-docker.sh
#!/usr/bin/env bash
set -ex
cd "$(dirname "${BASH_SOURCE[0]}")"
# Point pkg-config to the sysroot:
. cross-pkg-config
# Directory to install the packages to:
export RPI_STAGING="$PWD/staging"
rm -rf "${RPI_STAGING}"
# libexpat
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pushd libexpat/expat
./buildconf.sh
mkdir -p build
pushd build
../configure \
--prefix="/usr/local" \
--host="${HOST_TRIPLE}" \
--with-sysroot="${RPI_SYSROOT}"
make -j$(nproc)
make install DESTDIR="${RPI_SYSROOT}"
make install DESTDIR="${RPI_STAGING}"
popd
popd
# dbus
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pushd dbus
mkdir -p build
pushd build
cmake .. \
-DCMAKE_TOOLCHAIN_FILE="$HOME/${HOST_TRIPLE}.cmake" \
-DCMAKE_BUILD_TYPE="Release" \
-DCMAKE_INSTALL_PREFIX="/usr/local"
make -j$(nproc)
make install DESTDIR="${RPI_SYSROOT}"
make install DESTDIR="${RPI_STAGING}"
popd
popd
EOF
# Start the Docker container with the toolchain and run the build script:
image="tttapa/rpi-cross:armv6-rpi-linux-gnueabihf-dev"
docker run --rm -it -v "$PWD:/tmp/workdir" $image \
bash "/tmp/workdir/build-script-docker.sh"
You'll need to have Docker installed. When finished, the libraries will be in the `staging` folder in the working directory.
The Docker container with the toolchain is one I maintain (https://github.com/tttapa/RPi-Cpp-Toolchain), but the installation process should be similar with the toolchain you're using, you'll just have to install some extra dependencies such as make, autotools, and maybe cross-compile some other dependencies of Expat and D-Bus as well.
I also maintain some notes with instructions of the toolchains and cross-compilation processes, which you might find useful: https://tttapa.github.io/Pages/Raspberry-Pi/C++-Development/index.html
You might want to add some extra options to the configure and cmake steps, but that's outside of the scope of this answer, see the relevant D-Bus documentation.
Also note that it installs both libraries to both the sysroot and the staging area; it'll depend on what you want to do with it. You have to install at least `libexpat` to the `${RPI_SYSROOT}` folder, because that's the folder used as the sysroot for compiling `dbus`, which depends on `libexpat`. The sysroot folder for the compilation of `dbus` is selected in the CMake toolchain file, `~/${HOST_TRIPLE}.cmake`, which is included with the Docker container. Its contents are:
SET(CMAKE_SYSTEM_NAME Linux)
SET(CMAKE_C_COMPILER armv6-rpi-linux-gnueabihf-gcc)
SET(CMAKE_CXX_COMPILER armv6-rpi-linux-gnueabihf-g++)
SET(CMAKE_SYSTEM_PROCESSOR armv6)
set(CMAKE_SYSROOT $ENV{RPI_SYSROOT})
SET(CMAKE_FIND_ROOT_PATH ${CMAKE_SYSROOT})
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
You might also have to point `pkg-config` to the right sysroot folder. This is handled by the `cross-pkg-config` script:
export PKG_CONFIG_LIBDIR="${RPI_SYSROOT}/usr/local/lib:${RPI_SYSROOT}/opt/vc/lib"
export PKG_CONFIG_PATH="${RPI_SYSROOT}/usr/local/lib/pkgconfig:${RPI_SYSROOT}/usr/local/share/pkgconfig:${RPI_SYSROOT}/opt/vc/lib/pkgconfig"
export PKG_CONFIG_SYSROOT_DIR="${RPI_SYSROOT}"
Rust Qt binding
using rust with vscode in windows
If you are using powershell in vscode, the path might not pickup rust compiler. Read Powershell setup for vscode for more information.
Place a `.psrc.ps1` file at the root of the project folder with the following, which is the default path of the rust install.
$env:Path += ";$profile/.cargo/bin"
If you installed rust to a custom path, use that path instead.
Videos to watch:
debugging rust with vscode in windows
Server
- https://crates.io/crates/rust-embed embeds static file in the server binary.
- https://crates.io/crates/live-server live reload enabled server. Embeds livereload websocket code into html files on the fly when serving.
- https://crates.io/crates/tide a web server with support for middleware.
creative content framework
Ide design
rust drawing app
Rust setup in wsl
rust with vscode
I installed the VS Code extension,
- rust-analyzer (for IDE support)
It provides a lot of helpers, including refactoring, that make it easy to write code. This also puts a command in the command palette (ctrl+shift+p) called `rust analyzer: debug` which starts the debugger without setting up a vscode debug configuration, which is nice!
But then it started becoming frustrating.
rust in wsl
Why do we even need to use wsl to build rust? It has to do with symbolic debugging with vscode. In windows the default is MSVC-style debugging using `cppvsdbg`. Unfortunately with this many symbols were reported as having been "optimized away", even when `cargo.toml` contains,
[profile.dev]
opt-level = 0 # no optimizations
debug = true # emit debug info (level 2 by default)
So, the common recommendation I found was to use `lldb` for debugging. So I installed the VS Code extension,
- CodeLLDB (LLDB frontend)

Under windows, LLDB did give the symbols that were missing before, but lldb was running so slowly that it was almost unusable. When I tried in wsl under vscode, things were running very nicely.
Cross compile for windows msvc in wsl
But I wanted a windows binary. The Rust toolchain can compile for the windows msvc target, but I needed to use the msvc linker which lives on the windows side. In order to use the linker in the cargo flow, I followed the instructions from https://github.com/strickczq/msvc-wsl-rust for setting up the linker. In my case I use vscode, so I downloaded Build Tools for Visual Studio and then installed the `C++ Build Tools` workload.
This setup works great.
Rust Plugins
- https://adventures.michaelfbryan.com/posts/plugins-in-rust/
- https://github.com/Michael-F-Bryan/plugins_in_rust
Rust Multithreading
Rust's ownership and borrow checking at compile time makes it easy to use threads. However sharing data between threads requires some consideration.
Sharing data
Let's say we have two threads. One for gui, another for processing. We need to share a big data structure which is modified in gui thread. But they are accessed in the processing thread which could perform lengthy processing. How do we achieve this?
Accessing shared data between threads can be tricky, especially when the data is large and frequently modified. In such cases Read-copy-update can be considered.
Read copy update
Sharing data with read copy update (RCU) is a technique used in concurrent programming to allow multiple threads or processes to access a shared data structure simultaneously without the need for explicit locking. The RCU technique is commonly used in high-performance computing environments where lock contention can be a significant bottleneck.
The basic idea behind RCU is to maintain multiple versions of the shared data structure simultaneously, with each version accessible by a different thread or process. When a thread wants to read the shared data, it simply accesses the current version. When a thread wants to modify the shared data, it creates a new version of the data structure, modifies it, and then updates a global pointer to indicate that the new version is now the current version.
The RCU technique provides fast read access because readers do not need to acquire locks or wait for other threads to release locks. Instead, they simply access the current version of the shared data. The write operations are serialized using some other synchronization mechanism such as atomic operations or locks, but the read operations are not blocked by these write operations.
In the read-copy-update technique, a process or thread requesting to modify the shared data structure can create a copy of the data structure and work on it in isolation. Other threads that are still using the old version of the data structure can continue to use it without locking or blocking. The updated data structure is made available only when the current users are no longer using the old data structure. This process of sharing old data and allowing read-only access to it while a copy is modified is called copy-on-write.
RCU is particularly useful for shared data structures that are read frequently but updated infrequently, or where lock contention is a bottleneck. However, it requires careful design and implementation to ensure that the different versions of the shared data are correctly managed and that updates to the data structure do not result in inconsistencies or race conditions.
Rust data sharing
In Rust we can share by using,
- `Arc` (atomic reference counting) and `Mutex` (mutual exclusion) types.
- Message passing.
- A combination of both.
The following cases are not exhaustive but show some common uses.
Using Mutex
Wrap your data structure in an `Arc<Mutex<T>>`. This will allow multiple threads to share the data structure and access it safely.
use std::sync::{Arc, Mutex};

// Define your data structure.
struct MyDataStructure {
    // ...
}

// Wrap it in an Arc<Mutex<T>>.
let shared_data = Arc::new(Mutex::new(MyDataStructure { /* ... */ }));
In the gui thread, when you need to modify the data structure, you can acquire a lock on the `Mutex` using the `lock()` method. This will give you a mutable reference to the data structure that you can modify.
let mut data = shared_data.lock().unwrap();

// Modify the data structure as needed.
data.modify_something();
In the processing thread, when you need to access the data structure, you can also acquire a lock on the `Mutex` using the `lock()` method. This will give you an immutable reference to the data structure that you can safely access.
let data = shared_data.lock().unwrap();

// Access the data structure as needed.
let value = data.get_something();
Note that calling `lock()` on a `Mutex` can block if another thread has already acquired the lock. To avoid deadlocks, be sure to acquire locks on the `Mutex` in a consistent order across all threads that access it.
Also, keep in mind that accessing shared data across threads can have performance implications, especially if the data structure is large and frequently modified. You may want to consider other strategies such as message passing to minimize the need for shared mutable state.
Message passing
Using message passing can be a good way to minimize the need for shared mutable state, especially for large data structures. Instead of sharing the data structure directly, you can send messages between threads to communicate changes to the data.
Here's an example of how you could use message passing to modify a large data structure between two threads:
Define your data structure and a message type that can be used to modify it.
#![allow(unused)]
fn main() {
    // Define your data structure.
    struct MyDataStructure {
        // ...
    }

    // Define a message type that can modify the data structure.
    enum Message {
        ModifyDataStructure(Box<dyn FnOnce(&mut MyDataStructure) + Send + 'static>),
    }
}
Create a channel for sending messages between the gui and processing threads.
#![allow(unused)]
fn main() {
    use std::sync::mpsc::{channel, Sender, Receiver};

    // Create a channel for sending messages between threads.
    let (sender, receiver): (Sender<Message>, Receiver<Message>) = channel();
}
In the gui thread, when you need to modify the data structure, create a closure that modifies the data structure and send it as a message to the processing thread.
#![allow(unused)]
fn main() {
    // Create a closure that modifies the data structure.
    let modify_data = Box::new(|data: &mut MyDataStructure| {
        // Modify the data structure as needed.
        data.modify_something();
    });

    // Send the closure as a message to the processing thread.
    let message = Message::ModifyDataStructure(modify_data);
    sender.send(message).unwrap();
}
In the processing thread, receive messages from the channel and apply them to the data structure.
#![allow(unused)]
fn main() {
    // In this pure message-passing version the processing thread owns the data
    // structure, so no lock is needed here.
    let mut data = MyDataStructure { /* ... */ };

    // Receive messages from the channel and apply them to the data structure.
    loop {
        match receiver.recv() {
            Ok(message) => match message {
                Message::ModifyDataStructure(modify_data) => {
                    // Apply the closure to the locally owned data structure.
                    modify_data(&mut data);
                }
            },
            Err(_) => break,
        }
    }
}
Note that this example is simplified and doesn't handle errors, such as when sending or receiving messages fails. Also, keep in mind that message passing can have performance implications, especially for large data structures or frequent updates. You may want to consider using a combination of message passing and shared mutable state, depending on your specific requirements and constraints.
Combination of message passing and shared mutable state
This can be a good way to balance the need for communication and performance. You can use message passing to communicate high-level changes (small updates) to the data structure, and shared mutable state to allow for low-level access (large updates or initial state) and modification.
Here's an example of how you could use a combination of message passing and shared mutable state to modify a large data structure between two threads:
Define your data structure and a message type that can be used to modify it.
#![allow(unused)]
fn main() {
    // Define your data structure.
    struct MyDataStructure {
        // ...
    }

    // Define a message type that can modify the data structure.
    enum Message {
        ModifyDataStructure(Box<dyn FnOnce(&mut MyDataStructure) + Send + 'static>),
    }
}
Create a channel for sending messages between the gui and processing threads.
#![allow(unused)]
fn main() {
    use std::sync::mpsc::{channel, Sender, Receiver};

    // Create a channel for sending messages between threads.
    let (sender, receiver): (Sender<Message>, Receiver<Message>) = channel();
}
Wrap your data structure in an Arc<Mutex<T>>. This will allow multiple threads to share the data structure and access it safely.
#![allow(unused)]
fn main() {
    use std::sync::{Arc, Mutex};

    // Wrap your data structure in an Arc<Mutex<T>>.
    let shared_data = Arc::new(Mutex::new(MyDataStructure { /* ... */ }));
}
In the gui thread, when you need to modify the data structure, create a closure that modifies the data structure and send it as a message to the processing thread.
#![allow(unused)]
fn main() {
    // Create a closure that modifies the data structure.
    let modify_data = Box::new(|data: &mut MyDataStructure| {
        // Modify the data structure as needed.
        data.modify_something();
    });

    // Send the closure as a message to the processing thread.
    let message = Message::ModifyDataStructure(modify_data);
    sender.send(message).unwrap();
}
In the processing thread, receive messages from the channel and apply them to the data structure. In addition, you can acquire a lock on the Mutex to allow for low-level access and modification.
#![allow(unused)]
fn main() {
    // Receive messages from the channel and apply them to the data structure.
    loop {
        match receiver.recv() {
            Ok(message) => match message {
                Message::ModifyDataStructure(modify_data) => {
                    // Acquire a lock on the data structure and apply the closure.
                    let mut data = shared_data.lock().unwrap();
                    modify_data(&mut data);
                }
            },
            Err(_) => break,
        }
    }
}
Note that in the processing thread, you can also access the data structure outside of the messages by acquiring a lock on the Mutex. This will allow for low-level access and modification, without the overhead of message passing.
#![allow(unused)]
fn main() {
    // Acquire a lock on the data structure for low-level access.
    let mut data = shared_data.lock().unwrap();

    // Modify the data structure as needed.
    data.modify_something_else();
}
Using a combination of message passing and shared mutable state can be a powerful way to balance the need for communication and performance. Keep in mind that this approach requires careful synchronization and error handling, especially when modifying the data structure from multiple threads.
Read only access
Read access still has the possibility of data races.
If you're only reading the data structure and you don't care about data races, then you generally don't need to acquire a lock. Otherwise, if you're accessing the data structure, even if only for reading, you should use a lock to synchronize access and prevent data races.
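When reads vastly outnumber writes but synchronization is still required, std::sync::RwLock is a common alternative to Mutex: any number of readers can hold the read lock at the same time, while a writer takes it exclusively. A minimal sketch, with a plain Vec standing in for your own data structure:
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let shared = Arc::new(RwLock::new(vec![1, 2, 3]));

    // Several reader threads can hold the read lock concurrently.
    let readers: Vec<_> = (0..4)
        .map(|i| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                let data = shared.read().unwrap(); // shared, read-only access
                println!("reader {} sees {} items", i, data.len());
            })
        })
        .collect();

    {
        // A writer takes the lock exclusively.
        let mut data = shared.write().unwrap();
        data.push(4);
    }

    for r in readers {
        r.join().unwrap();
    }
}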
Locking for both read and write
The following shows both accesses,
#![allow(unused)]
fn main() {
    use std::sync::Arc;
    use std::sync::Mutex;

    // Wrap your data structure in an Arc<Mutex<T>>.
    let shared_data = Arc::new(Mutex::new(MyDataStructure { /* ... */ }));

    // In the processing thread, receive messages from the channel and read the data structure.
    loop {
        match receiver.recv() {
            Ok(message) => match message {
                Message::GetSample => {
                    // Acquire a lock on the data structure for read-only access.
                    let data = shared_data.lock().unwrap();
                    // Read the data structure as needed.
                    let sample = data.get_sample();
                    // Use the sample in the processing thread.
                    // ...
                }
                Message::ModifyDataStructure(modify_data) => {
                    // Acquire a lock on the data structure and apply the closure.
                    let mut data = shared_data.lock().unwrap();
                    modify_data(&mut data);
                }
            },
            Err(_) => break,
        }
    }
}
In this example, the processing thread acquires a lock on the data structure for read-only access when it receives a message to get a sample, but acquires a lock for write access when it receives a message to modify the data structure. This ensures safe access to the data structure from multiple threads.
Links to materials related to data sharing
- https://youtu.be/a10JpqI-CvU
- https://forum.juce.com/t/timur-doumler-talks-on-c-audio-sharing-data-across-threads/26311/1
- https://github.com/hogliux/farbot
- https://youtu.be/7fKxIZOyBCE
- https://cfsamsonbooks.gitbook.io/explaining-atomics-in-rust/
- https://github.com/preshing/junction
- https://preshing.com/20160726/using-quiescent-states-to-reclaim-memory/
- http://www.cs.toronto.edu/~tomhart/papers/tomhart_thesis.pdf
- https://codeandbitters.com/learning-rust-crossbeam-epoch/
- https://github.com/ericseppanen/epoch_playground
- https://aturon.github.io/blog/2015/08/27/epoch/ most comprehensive explanation
- https://marabos.nl/atomics/ most comprehensive explanation
- https://www.packtpub.com/en-us/product/hands-on-concurrency-with-rust-9781788399975
- https://youtu.be/9XAx279s7gs
PowerShell Scripts
Add/Remove path in path variable
function Set-PathVariable {
    param (
        [string]$AddPath,
        [string]$RemovePath
    )
    $regexPaths = @()
    if ($PSBoundParameters.Keys -contains 'AddPath') {
        $regexPaths += [regex]::Escape($AddPath)
    }
    if ($PSBoundParameters.Keys -contains 'RemovePath') {
        $regexPaths += [regex]::Escape($RemovePath)
    }
    $arrPath = $env:Path -split ';'
    foreach ($path in $regexPaths) {
        $arrPath = $arrPath | Where-Object {$_ -notMatch "^$path\\?"}
    }
    $env:Path = ($arrPath + $AddPath) -join ';'
}
download a file
Invoke-WebRequest -Uri <source> -OutFile <destination>
create directory if not exist.
New-Item -Force -ItemType directory -Path foo
powershell cli installation recipe from github release,
$repo = "slideshow-dist"
$binary = "slideshow-win.exe"
$curpath = [Environment]::GetEnvironmentVariable('Path', 'User')
function Set-PathVariable {
    param (
        [string]$AddPath,
        [string]$RemovePath
    )
    $regexPaths = @()
    if ($PSBoundParameters.Keys -contains 'AddPath') {
        $regexPaths += [regex]::Escape($AddPath)
    }
    if ($PSBoundParameters.Keys -contains 'RemovePath') {
        $regexPaths += [regex]::Escape($RemovePath)
    }
    $arrPath = $curpath -split ';'
    foreach ($path in $regexPaths) {
        $arrPath = $arrPath | Where-Object {$_ -notMatch "^$path\\?"}
    }
    $newpath = ($arrPath + $AddPath) -join ';'
    [Environment]::SetEnvironmentVariable("Path", $newpath, 'User')
}
$installPath = "~\github.kkibria"
$latest = ("https://github.com/kkibria/" + $repo + "/releases/latest")
Write-Output "Preparing install directory..."
New-Item -Path $installPath -ItemType Directory -Force | Out-Null
$f = (Convert-Path $installPath)
Write-Output ("install directory '" + $f + "' created.")
Write-Output ("{"+ $binary + " install: " + $curpath + "}") >> ($f+"\.path.backup")
Write-Output "Backing up path variable..."
Write-Output "Updating path variable..."
Set-PathVariable $f
$a = $latest+"/download/"+$binary
$b = $f+"\"+$binary
Write-Output "Downloading executable to install directory..."
Invoke-WebRequest -Uri $a -OutFile $b
Write-Output "Install complete."
Python
Virtual environment for python
The venv module provides support for creating lightweight “virtual environments” with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (which matches the version of the binary that was used to create this environment) and can have its own independent set of installed Python packages in its site directories.
- First find out the python path of the version of python you need to use, for example, C:/Python39/python.exe.
- Go to the root folder of the project and create a virtual environment called .venv for the project.
cd my_proj
C:/Python39/python.exe -m venv .venv
- Now activate the environment
./.venv/scripts/Activate
If you are developing a module, then you need a separate staging area for the module so that you can develop and test,
mkdir lib_staging
Now we need to add this directory to the python module search path in the file sitecustomize.py located in the .venv directory,
.venv/sitecustomize.py
import sys, os
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "../lib_staging")))
You can place your modules in the lib_staging area as a git sub-project or however you want to manage development.
To deactivate python local environment
deactivate
venv to generate requirements.txt
pip freeze > requirements.txt
To load all the libraries in virtual environment
pip install -r requirements.txt
Some python requests module issues with SSL
Sometimes, when you are behind a company proxy, it replaces the certificate chain with the ones of Proxy. Adding the certificates in cacert.pem used by certifi should solve the issue.
- Find the path where cacert.pem is located. Install certifi if you don't have it (pip install certifi), then,
import certifi
certifi.where()
which prints something like,
C:\\Users\\[UserID]\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages\\certifi\\cacert.pem
- Open the URL (or base URL) on a browser. Browser will download the certificates of chain of trust from the URL. The chain looks like,
Root Authority (you probably already have this)
 +-- Local Authority (might be missing)
      +-- Site certificate (you don't need this)
- You can save the whole chain as a .p7b file, which can be opened in windows explorer. Or you can just save the Local Authority as Base64 encoded .cer files.
- Now open cacert.pem in a notepad and just add every downloaded certificate's contents (-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----) at the end.
python image manipulation as they are downloaded
import os
import io
import requests
from PIL import Image
import tempfile

# img_url and out_dir are assumed to be defined elsewhere.
buffer = tempfile.SpooledTemporaryFile(max_size=1e9)
r = requests.get(img_url, stream=True)
if r.status_code == 200:
    downloaded = 0
    filesize = int(r.headers['content-length'])
    for chunk in r.iter_content(chunk_size=1024):
        downloaded += len(chunk)
        buffer.write(chunk)
        print(downloaded/filesize)
    buffer.seek(0)
    i = Image.open(io.BytesIO(buffer.read()))
    i.save(os.path.join(out_dir, 'image.jpg'), quality=85)
buffer.close()
- https://stackoverflow.com/questions/37751877/downloading-image-with-pil-and-requests
- https://www.kite.com/python/answers/how-to-read-an-image-data-from-a-url-in-python
Range header in python
Large downloads are sometimes interrupted. However, a good HTTP server that supports the Range header lets you resume the download from where it was interrupted. The standard Python module urllib lets you access this functionality almost seamlessly. You need to add only the needed header and intercept the error code the server sends to confirm that it will respond with a partial file:
import os
import urllib.request

class myURLOpener(urllib.request.FancyURLopener):
    """ Subclass to override error 206 (partial file being sent); okay for us """
    def http_error_206(self, url, fp, errcode, errmsg, headers, data=None):
        pass    # Ignore the expected "non-error" code

def getrest(dlFile, fromUrl, verbose=0):
    existSize = 0
    myUrlclass = myURLOpener()
    if os.path.exists(dlFile):
        outputFile = open(dlFile, "ab")
        existSize = os.path.getsize(dlFile)
        # If the file exists, then download only the remainder
        myUrlclass.addheader("Range", "bytes=%s-" % (existSize))
    else:
        outputFile = open(dlFile, "wb")
    webPage = myUrlclass.open(fromUrl)
    if verbose:
        for k, v in webPage.headers.items():
            print(k, "=", v)
    # If we already have the whole file, there is no need to download it again
    numBytes = 0
    webSize = int(webPage.headers['Content-Length'])
    if webSize == existSize:
        if verbose:
            print("File (%s) was already downloaded from URL (%s)" % (dlFile, fromUrl))
    else:
        if verbose:
            print("Downloading %d more bytes" % (webSize - existSize))
        while 1:
            data = webPage.read(8192)
            if not data:
                break
            outputFile.write(data)
            numBytes = numBytes + len(data)
    webPage.close()
    outputFile.close()
    if verbose:
        print("downloaded", numBytes, "bytes from", webPage.url)
    return numBytes
The HTTP Range header lets the web server know that you want only a certain range of data to be downloaded, and this recipe takes advantage of this header. Of course, the server needs to support the Range header, but since the header is part of the HTTP 1.1 specification, it’s widely supported. This recipe has been tested with Apache 1.3 as the server, but I expect no problems with other reasonably modern servers.
The recipe lets urllib.request.FancyURLopener do all the hard work of adding a new header, as well as the normal handshaking. I had to subclass it to make it known that the error 206 is not really an error in this case, so you can proceed normally. I also do some extra checks to quit the download if I've already downloaded the whole file.
yaml file
- https://pyyaml.org/wiki/PyYAMLDocumentation
- https://www.andrewvillazon.com/validate-yaml-python-schema/
Python library
Refactoring library
Often we have to move library functions from file to file to reorganize. Just to make sure all the functions are the same between the old set of files and the new set of files, the following can be used,
compare_mods.py
import importlib
a = '''
chill_test
'''
b = '''
src.test.libtest1
books.book1.ch7
books.book1.ch8
books.book1.ch2_4
books.book1.ch5_6
experiment
'''
def funcs(mods: str) -> set:
    l = mods.strip().split()
    mods = map(importlib.import_module, l)
    dirs = map(dir, mods)
    return set([i for d in dirs for i in d])
sa = funcs(a)
sb = funcs(b)
print(sa)
print(sb)
print (sa ^ sb)
where a is the list of old python modules and b is the new ones.
Python object
It is easy to create an object and attach attributes.
obj = lambda: None
obj.exclude = 'hello'
Turning a dict to object
attr = {"a" : "b", "c": "d", "e": "f"}
obj = lambda: None
obj.__dict__ = attr
print(obj.a)
print(obj.c)
print(obj.e)
Java to Go language
I have written large Java projects in the past and have used proprietary tools to turn them into native executables. The Go language provides a nice alternative with several important advantages. Now I am looking into some automated way to convert my Java code into Go with minimal manual conversion. I am looking at LLVM, antlr and other tools. This will be a work in progress for the near future.
Links
- https://github.com/dglo/java2go.
- https://github.com/andrewarrow/traot.
- https://talks.golang.org/2015/go-for-java-programmers.slide.
- https://talks.golang.org/2015/go-for-java-programmers.
- https://talks.golang.org.
LLVM
- A Brief Introduction to LLVM.
- Introduction to LLVM Building simple program analysis tools and instrumentation.
Polyglot
GRAAL
Some info on go
- Default go library install path for a user in windows is the %UserProfile%\go directory. Check Windows special directories/shortcuts for similar paths in windows.
- Pigeon is a PEG based parser generator in go. We can take the antlr java grammar and convert it into a Pigeon grammar file and use Pigeon to parse. Check Documentation.
Use antlr and keep everything in Java world to translate
Use antlr to generate a java parser in golang
I prefer this option. This seems to be the path of least resistance for now.
- Parsing with ANTLR 4 and Go. The author has already created golang parsers for all available grammars in [Github](https://github.com/bramp/antlr4-grammars.git).
Manual porting
Edited excerpts from the article,
String vs. string
In Java, String is an object that really is a reference (a pointer). As a result, a string can be null.
In Go string is a value type. It can't be nil, only empty.
Idea: mechanically replace null with "".
Errors vs. exceptions
Java uses exceptions to communicate errors.
Go returns values of the error interface.
Idea: Change function signatures to return error values and propagate them up the call stack.
Generics
Go doesn't have Generics. Porting generic APIs was the biggest challenge.
Here's an example of a generic method in Java:
public <T> T load(Class<T> clazz, String id) {
And the caller:
Foo foo = load(Foo.class, "id")
Two strategies can be useful. One is to use interface{}, which combines a value and its type, similar to object in Java. This is not the preferred approach. While it works, operating on interface{} is clumsy for the user of the library.
The other is to use reflection and the above code was ported as:
func Load(result interface{}, id string) error
Reflection to query the type of result can be used to create values of that type from a JSON document.
And the caller side:
var result *Foo
err := Load(&result, "id")
Function overloading
Go doesn't have overloading.
Java:
void foo(int a, String b) {}
void foo(int a) { foo(a, null); }
In go write 2 functions instead:
func foo(a int) {}
func fooWithB(a int, b string) {}
When the number of potential arguments is large, use a struct:
type FooArgs struct {
    A int
    B string
}

func foo(args *FooArgs) { }
Inheritance
Go is not especially object-oriented and doesn't have inheritance. Simple cases of inheritance can be ported with embedding.
class B extends A { }
Can sometimes be ported as:
type A struct { }

type B struct {
    A
}
We've embedded A inside B, so B inherits all the methods and fields of A.
It doesn't work for virtual functions. There is no good way to directly port code that uses virtual functions. One option to emulate virtual functions is to use embedding of structs and function pointers. This essentially re-implements the virtual table that Java gives you for free as part of the object implementation.
Another option is to write a stand-alone function that dispatches the right function for a given type by using type switch.
Interfaces
Both Java and Go have interfaces but they are different things. You can create a Go interface type that replicates the Java interface.
Or just don't use interfaces; instead use exposed concrete structs in the API.
Circular imports between packages
Java allows circular imports between packages. Go does not. As a result you will not be able to replicate the package structure of the Java code. Restructuring will be needed.
Private, public, protected
Go simplified access by only having public vs. private and scoping access to package level.
Concurrency
Go's concurrency support is simply the best, and the built-in race detector is of great help in tracking down concurrency bugs.
Mechanically translated code will require restructuring to be more idiomatic Go.
Fluent function chaining
Java has function chaining like this,
List<ReduceResult> results = session.query(User.class)
.groupBy("name")
.selectKey()
.selectCount()
.orderByDescending("count")
.ofType(ReduceResult.class)
.toList();
This only works in languages that communicate errors via exceptions. When a function additionally returns an error, it's no longer possible to chain it like that.
To replicate chaining in Go, "stateful error" approach would be useful:
type Query struct {
    err error
}

func (q *Query) WhereEquals(field string, val interface{}) *Query {
    if q.err != nil {
        return q
    }
    // logic that might set q.err
    return q
}

func (q *Query) GroupBy(field string) *Query {
    if q.err != nil {
        return q
    }
    // logic that might set q.err
    return q
}

func (q *Query) Execute(result interface{}) error {
    if q.err != nil {
        return q.err
    }
    // do logic
    return nil
}
This can be chained:
var result *Foo
err := NewQuery().WhereEquals("Name", "Frank").GroupBy("Age").Execute(&result)
Go code is shorter
This is not so much a property of Java but of the culture which dictates what is considered idiomatic code.
In Java setter and getter methods are common. As a result, Java code:
class Foo {
    private int bar;

    public void setBar(int bar) {
        this.bar = bar;
    }

    public int getBar() {
        return this.bar;
    }
}
ends up in Go as:
type Foo struct {
    Bar int
}
Java to Javascript language
j2cl
Google's transpiler j2cl transpiles Java to javascript and uses the Closure compiler to clean up the produced javascript. It is written in java.
Python to Go language
grumpy
grumpy is by google, but it supports python 2.7 only. I am exploring my own python 3.x transpiler solution. Work in progress.
Python to Javascript language
I have written large Python projects in the past, looking into converting some to javascript.
Links
setup transcrypt
We have to use a python virtual environment for transcrypt to function correctly.
Now install transcrypt,
pip install transcrypt
converting to javascript
have a python file
test-trans.py
import os
from pathlib import Path
inifile = os.path.join(Path.home(), ".imagesel", "config.ini")
print(inifile)
now test it with python
py test-trans.py
In order to use os and pathlib we need to replace them with equivalent javascript calls using stubs. We will create the stubs in a dir called stubs. These are special python files that can use javascript libraries.
os.py
p = require('path')

class path:
    def join(*args):
        return p.join(*args)
pathlib.py
os = require('os')

class Path:
    def home():
        return os.homedir()
Now we can convert them to javascript
python -m transcrypt -b -n ..\test-trans.py
This will produce a directory called __target__ with all the javascript files in it.
converting to a node bundle
Now initialize node environment and install rollup
npm init // accept defaults
npm i rollup
Next we need to use rollup to bundle them for node,
node_modules\.bin\rollup .\__target__\test-trans.js --o bundle.js --f cjs
Now test it with node
node bundle.js
We get the same result.
Compilers/Parsers
Earley v/s PEG parsers
Earley and PEG parsers are two different parsing algorithms with some similarities and differences.
Earley parser is a general-purpose parsing algorithm that can parse any context-free grammar. It uses a dynamic programming technique to predict, scan, and complete input tokens to construct a parse tree. Earley parser can handle ambiguous grammars and provides the most expressive power, but it can also be slow for large grammars or inputs.
PEG (Parsing Expression Grammar) parser, on the other hand, is a top-down parsing algorithm that matches input against a set of parsing expressions, which are similar to regular expressions but with more features. PEG parser prioritizes parsing rules and avoids ambiguities by always choosing the first matching rule. PEG parsers can be fast and efficient for parsing structured text, but they may not be suitable for complex grammars that require backtracking or lookahead.
Here are some key differences between Earley and PEG parsers:
- Earley parser can handle arbitrary context-free grammars, whereas PEG parser can only handle a subset of context-free grammars that are deterministic and unambiguous.
- Earley parser is a chart parser driven by dynamic programming with top-down prediction, whereas PEG parser is essentially a recursive descent (top-down) parser with ordered choice and limited backtracking.
- Earley parser can handle ambiguity by constructing multiple parse trees, whereas PEG parser resolves ambiguity by prioritizing rules and not backtracking.
In summary, Earley parser is more general and powerful but can be slower and more memory-intensive than PEG parser. PEG parser is more limited in scope but can be faster and easier to use for certain types of parsing tasks.
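To make the ordered-choice behaviour concrete, here is a minimal PEG-style recursive descent parser sketch in Rust for a toy grammar. The grammar and all names are invented for illustration; a real project would typically generate this kind of code from a grammar file with a tool such as Pigeon or a Rust PEG generator.
// Toy grammar:
//   Sum    <- Value ('+' Value)*
//   Value  <- Number / '(' Sum ')'     (ordered choice)
//   Number <- [0-9]+
struct Parser<'a> {
    input: &'a [u8],
    pos: usize,
}

impl<'a> Parser<'a> {
    fn new(input: &'a str) -> Self {
        Parser { input: input.as_bytes(), pos: 0 }
    }

    // Consume a single expected byte.
    fn eat(&mut self, c: u8) -> bool {
        if self.input.get(self.pos) == Some(&c) {
            self.pos += 1;
            true
        } else {
            false
        }
    }

    // Number <- [0-9]+
    fn number(&mut self) -> Option<i64> {
        let start = self.pos;
        while self.input.get(self.pos).map_or(false, |b| b.is_ascii_digit()) {
            self.pos += 1;
        }
        if self.pos == start {
            return None;
        }
        std::str::from_utf8(&self.input[start..self.pos]).ok()?.parse().ok()
    }

    // Value <- Number / '(' Sum ')'
    // Ordered choice: the first alternative that succeeds wins and is never
    // reconsidered; backtracking is local to the failed alternative.
    fn value(&mut self) -> Option<i64> {
        let saved = self.pos;
        if let Some(n) = self.number() {
            return Some(n);
        }
        self.pos = saved;
        if self.eat(b'(') {
            if let Some(v) = self.sum() {
                if self.eat(b')') {
                    return Some(v);
                }
            }
        }
        self.pos = saved;
        None
    }

    // Sum <- Value ('+' Value)*
    fn sum(&mut self) -> Option<i64> {
        let mut total = self.value()?;
        loop {
            let saved = self.pos;
            if self.eat(b'+') {
                if let Some(v) = self.value() {
                    total += v;
                    continue;
                }
            }
            self.pos = saved;
            return Some(total);
        }
    }
}

fn main() {
    let mut p = Parser::new("1+(2+3)+4");
    assert_eq!(p.sum(), Some(10));
}
Note how the parser never builds multiple parse trees: once Value matches a Number it commits to that choice, which is exactly why a PEG cannot represent ambiguity the way Earley can.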
More on parsing concepts
Python based parsers
Java based parsers
todo: vscode setup for antlr.
Rust based parsers
- Writing Interpreters in Rust: a Guide
- Lisp interpreter in Rust
- Wrapper for Exposing LLVM
- LR(1) parser generator for Rust
- Building a compiler in Rust videos, part1 and part2
- Create Your Own Programming Language with Rust
- Rust Compiler Development Guide
- PEG parser generator for Rust
Peg Parsers
PEG and Packrat
A Packrat parser is a memoizing variant of PEG that can avoid redundant computation by caching intermediate parsing results. This memoization helps to handle backtracking efficiently and can also improve the performance of parsing long input sequences.
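A minimal sketch of the memoization idea in Rust (the rule, the types and the input are invented for illustration): the outcome of trying a rule at a given start position is cached, so when backtracking causes the same rule to be retried at the same position, the answer comes from the table instead of being re-parsed.
use std::collections::HashMap;

// Cache value: the parsed number and the position just after it,
// or None if the rule failed at that position.
type MemoVal = Option<(i64, usize)>;

struct Packrat<'a> {
    input: &'a [u8],
    // Cache key: (rule name, start position).
    memo: HashMap<(&'static str, usize), MemoVal>,
}

impl<'a> Packrat<'a> {
    // Number <- [0-9]+, memoized per starting position.
    fn number(&mut self, pos: usize) -> MemoVal {
        if let Some(hit) = self.memo.get(&("number", pos)) {
            return *hit; // cache hit: no work repeated on backtracking
        }
        let mut end = pos;
        while self.input.get(end).map_or(false, |b| b.is_ascii_digit()) {
            end += 1;
        }
        let result: MemoVal = if end > pos {
            std::str::from_utf8(&self.input[pos..end])
                .ok()
                .and_then(|s| s.parse().ok())
                .map(|n| (n, end))
        } else {
            None
        };
        self.memo.insert(("number", pos), result);
        result
    }
}

fn main() {
    let mut p = Packrat { input: &b"42+1"[..], memo: HashMap::new() };
    assert_eq!(p.number(0), Some((42, 2))); // parsed and cached
    assert_eq!(p.number(0), Some((42, 2))); // answered from the memo table
}
The memo table is what gives Packrat parsers their linear-time guarantee, at the cost of memory proportional to the number of (rule, position) pairs.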
Articles
- Wikipedia Parsing expression grammar.
- PEG Parsing Series Overview, YouTube "Writing a PEG parser for fun and profit" - Guido van Rossum (North Bay Python 2019).
- Packrat Parsing.
Typescript
Javascript
Earley parsers
Javascript
- https://nearley.js.org/. Use it with lexer https://github.com/no-context/moo
Linux from scratch
Projects using LFS
- Linux From scratch build scripts
- Use dpkg (.deb) package management on LFS 6.3
- Fakeroot approach for package installation
- LFS cross compile for arm
- Making your own Linux distribution for the Raspberry Pi
- The Linux Documentation Project
- https://www.tldp.org/HOWTO/Program-Library-HOWTO/index.html
- Cross Linux From Scratch (CLFS) on the Raspberry Pi
- someones journal of building LFS
- Building GCC as a cross compiler for Raspberry Pi
- Docker configuration for building Linux From Scratch system
LFS with wsl2
Get the LFS book. The book provides a step by step guide to building an LFS system. The following provides the steps for chapters 1 and 2.
LFS (Part 1)
Chapter 1 and 2, Setup and Disc image
We will need a few packages, which we will install if they are not already installed,
sudo apt-get install subversion xsltproc
Run setup.sh, which will create sourceMe.sh in the current folder. It will also download the book source to create other scripts and files.
sh /mnt/c/Users/<user>/Documents/linux/LFS/lfs-scripts/setup.sh
Now source sourceMe.sh to mount the scripts,
$ source sourceMe.sh
Mounting /mnt/c/Users/<user>/Documents/linux/LFS/lfs-scripts on lfs-scripts
umount: lfs-scripts: not mounted.
sh: mount-lfs.sh: No such file or directory
Check the requirements for LFS,
sh lfs-scripts/version-check.sh
Change shell to bash from dash if necessary,
sh lfs-scripts/ch2bash.sh
Let us create an empty disk image with two partitions,
sh lfs-scripts/mkdiscimg.sh
This will create lfs.img with two partitions and a script mount-lfs.sh to mount the image. Check the image,
$ fdisk -lu lfs.img
Disk lfs.img: 10.26 GiB, 11010048000 bytes, 21504000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0e0d22a7
Device Boot Start End Sectors Size Id Type
lfs.img1 8192 532479 524288 256M c W95 FAT32 (LBA)
lfs.img2 532480 21503999 20971520 10G 83 Linux
Now source sourceMe.sh a second time to mount the image.
From now on, you can source sourceMe.sh before you start working, to have everything set up again if you reboot between steps.
We are ready to go with chapter 3.
Chapter 3, Get sources
When we ran setup.sh, it downloaded the book and created,
packages.sh
patches.sh
wget-list
md5sums
Feel free to examine them.
Lets proceed to download the sources,
sudo mkdir -v $LFS/sources
sudo chmod -v a+wt $LFS/sources
wget --input-file=wget-list --continue --directory-prefix=$LFS/sources
cp md5sums $LFS/sources
pushd $LFS/sources
md5sum -c md5sums
popd
Note that if a download fails you have to find an alternate source by googling and then adjust wget-list. At the time of this writing the mpfr url had to be changed to https://ftp.gnu.org/gnu/mpfr/mpfr-4.0.2.tar.xz.
Writing a makefile using packages.sh and patches.sh could be an alternative.
We are ready to go with chapter 4.
Chapter 4, Setup user to build toolchain
Create user lfs and set permissions,
sudo mkdir -v $LFS/tools
sudo ln -sv $LFS/tools /
sudo groupadd lfs
sudo useradd -s /bin/bash -g lfs -m -k /dev/null lfs
sudo passwd lfs
sudo chown -v lfs $LFS/tools
sudo chown -v lfs $LFS/sources
Login as lfs,
su - lfs
Setup lfs's environment,
cat > ~/.bash_profile << "EOF"
exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash
EOF
cat > ~/.bashrc << "EOF"
set +h
umask 022
LFS=/mnt/lfs
LC_ALL=POSIX
LFS_TGT=$(uname -m)-lfs-linux-gnu
PATH=/tools/bin:/bin:/usr/bin
export LFS LC_ALL LFS_TGT PATH
EOF
source ~/.bash_profile
This concludes chapter 4. Go to Part 2 for next chapters.
LFS (Part 2)
Chapter 5, Build Toolchain
In chapter 5, prepare the lib folders and go to the sources folder,
case $(uname -m) in
x86_64) mkdir -v /tools/lib && ln -sv lib /tools/lib64 ;;
esac
cd $LFS/sources
Build pass 1 binutils
tar -xvf binutils-2.34.tar.xz
pushd binutils-2.34
mkdir -v build
cd build
../config.guess
../configure --prefix=/tools \
--with-sysroot=$LFS \
--with-lib-path=/tools/lib \
--target=$LFS_TGT \
--disable-nls \
--disable-werror
make
make install
# if everything went fine we can remove
popd
rm -rf binutils-2.34
build pass 1 gcc
tar -xvf gcc-9.2.0.tar.xz
pushd gcc-9.2.0
tar -xf ../mpfr-4.0.2.tar.xz
mv -v mpfr-4.0.2 mpfr
tar -xf ../gmp-6.2.0.tar.xz
mv -v gmp-6.2.0 gmp
tar -xf ../mpc-1.1.0.tar.gz
mv -v mpc-1.1.0 mpc
for file in gcc/config/{linux,i386/linux{,64}}.h
do
cp -uv $file{,.orig}
sed -e 's@/lib\(64\)\?\(32\)\?/ld@/tools&@g' \
-e 's@/usr@/tools@g' $file.orig > $file
echo '
#undef STANDARD_STARTFILE_PREFIX_1
#undef STANDARD_STARTFILE_PREFIX_2
#define STANDARD_STARTFILE_PREFIX_1 "/tools/lib/"
#define STANDARD_STARTFILE_PREFIX_2 ""' >> $file
touch $file.orig
done
case $(uname -m) in
x86_64)
sed -e '/m64=/s/lib64/lib/' \
-i.orig gcc/config/i386/t-linux64
;;
esac
mkdir -v build
cd build
../configure \
--target=$LFS_TGT \
--prefix=/tools \
--with-glibc-version=2.11 \
--with-sysroot=$LFS \
--with-newlib \
--without-headers \
--with-local-prefix=/tools \
--with-native-system-header-dir=/tools/include \
--disable-nls \
--disable-shared \
--disable-multilib \
--disable-decimal-float \
--disable-threads \
--disable-libatomic \
--disable-libgomp \
--disable-libquadmath \
--disable-libssp \
--disable-libvtv \
--disable-libstdcxx \
--enable-languages=c,c++
# take a cup of coffee and relax
make
make install
popd
rm -rf gcc-9.2.0
Install linux headers
tar -xvf linux-5.5.3.tar.xz
pushd linux-5.5.3
make mrproper
make headers
cp -rv usr/include/* /tools/include
popd
rm -rf linux-5.5.3
Build Glibc
tar -xvf glibc-2.31.tar.xz
pushd glibc-2.31
mkdir -v build
cd build
../configure \
--prefix=/tools \
--host=$LFS_TGT \
--build=$(../scripts/config.guess) \
--enable-kernel=3.2 \
--with-headers=/tools/include
make
make install
popd
rm -rf glibc-2.31
Test the build,
mkdir test
pushd test
echo 'int main(){}' > dummy.c
$LFS_TGT-gcc dummy.c
readelf -l a.out | grep ': /tools'
popd
rm -rf test
This should produce output,
[Requesting program interpreter: /tools/lib64/ld-linux-x86-64.so.2]
Note that for 32-bit machines, the interpreter name will be /tools/lib/ld-linux.so.2.
Build Libstdc++
tar -xvf gcc-9.2.0.tar.xz
pushd gcc-9.2.0
mkdir -v build
cd build
../libstdc++-v3/configure \
--host=$LFS_TGT \
--prefix=/tools \
--disable-multilib \
--disable-nls \
--disable-libstdcxx-threads \
--disable-libstdcxx-pch \
--with-gxx-include-dir=/tools/$LFS_TGT/include/c++/9.2.0
make
make install
popd
rm -rf gcc-9.2.0
Build pass 2 binutils
tar -xvf binutils-2.34.tar.xz
pushd binutils-2.34
mkdir -v build
cd build
CC=$LFS_TGT-gcc \
AR=$LFS_TGT-ar \
RANLIB=$LFS_TGT-ranlib \
../configure \
--prefix=/tools \
--disable-nls \
--disable-werror \
--with-lib-path=/tools/lib \
--with-sysroot
make
make install
make -C ld clean
make -C ld LIB_PATH=/usr/lib:/lib
cp -v ld/ld-new /tools/bin
popd
rm -rf binutils-2.34
Build pass 2 gcc
tar -xvf gcc-9.2.0.tar.xz
pushd gcc-9.2.0
tar -xf ../mpfr-4.0.2.tar.xz
mv -v mpfr-4.0.2 mpfr
tar -xf ../gmp-6.2.0.tar.xz
mv -v gmp-6.2.0 gmp
tar -xf ../mpc-1.1.0.tar.gz
mv -v mpc-1.1.0 mpc
cat gcc/limitx.h gcc/glimits.h gcc/limity.h > \
`dirname $($LFS_TGT-gcc -print-libgcc-file-name)`/include-fixed/limits.h
for file in gcc/config/{linux,i386/linux{,64}}.h
do
cp -uv $file{,.orig}
sed -e 's@/lib\(64\)\?\(32\)\?/ld@/tools&@g' \
-e 's@/usr@/tools@g' $file.orig > $file
echo '
#undef STANDARD_STARTFILE_PREFIX_1
#undef STANDARD_STARTFILE_PREFIX_2
#define STANDARD_STARTFILE_PREFIX_1 "/tools/lib/"
#define STANDARD_STARTFILE_PREFIX_2 ""' >> $file
touch $file.orig
done
case $(uname -m) in
x86_64)
sed -e '/m64=/s/lib64/lib/' \
-i.orig gcc/config/i386/t-linux64
;;
esac
sed -e '1161 s|^|//|' \
-i libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc
# take a cup of coffee and relax
make
make install
ln -sv gcc /tools/bin/cc
popd
rm -rf gcc-9.2.0
Test the build,
mkdir test
pushd test
echo 'int main(){}' > dummy.c
cc dummy.c
readelf -l a.out | grep ': /tools'
popd
rm -rf test
This should produce output,
[Requesting program interpreter: /tools/lib64/ld-linux-x86-64.so.2]
Note that for 32-bit machines, the interpreter name will be /tools/lib/ld-linux.so.2.
Build Tcl
tar -xvf tcl8.6.10-src.tar.gz
pushd tcl8.6.10
cd unix
./configure --prefix=/tools
make
TZ=UTC make test
make install
chmod -v u+w /tools/lib/libtcl8.6.so
make install-private-headers
ln -sv tclsh8.6 /tools/bin/tclsh
popd
rm -rf tcl8.6.10
Build expect
tar -xvf expect5.45.4.tar.gz
pushd expect5.45.4
cp -v configure{,.orig}
sed 's:/usr/local/bin:/bin:' configure.orig > configure
make
make test
make SCRIPTS="" install
popd
rm -rf expect5.45.4
build DejaGNU
tar -xvf dejagnu-1.6.2.tar.gz
pushd dejagnu-1.6.2
./configure --prefix=/tools
make install
make check
popd
rm -rf dejagnu-1.6.2
build M4
tar -xvf m4-1.4.18.tar.xz
pushd m4-1.4.18
sed -i 's/IO_ftrylockfile/IO_EOF_SEEN/' lib/*.c
echo "#define _IO_IN_BACKUP 0x100" >> lib/stdio-impl.h
./configure --prefix=/tools
make
make check
make install
popd
rm -rf m4-1.4.18
build Ncurses
tar -xvf ncurses-6.2.tar.gz
pushd ncurses-6.2
sed -i s/mawk// configure
./configure --prefix=/tools \
--with-shared \
--without-debug \
--without-ada \
--enable-widec \
--enable-overwrite
make
make install
ln -s libncursesw.so /tools/lib/libncurses.so
popd
rm -rf ncurses-6.2
build bash
tar -xvf bash-5.0.tar.gz
pushd bash-5.0
./configure --prefix=/tools --without-bash-malloc
make
make tests
make install
ln -sv bash /tools/bin/sh
popd
rm -rf bash-5.0
build bison
tar -xvf bison-3.5.2.tar.xz
pushd bison-3.5.2
./configure --prefix=/tools
make
make check
make install
popd
rm -rf bison-3.5.2
build Bzip2
tar -xvf bzip2-1.0.8.tar.gz
pushd bzip2-1.0.8
make -f Makefile-libbz2_so
make clean
make
make PREFIX=/tools install
cp -v bzip2-shared /tools/bin/bzip2
cp -av libbz2.so* /tools/lib
ln -sv libbz2.so.1.0 /tools/lib/libbz2.so
popd
rm -rf bzip2-1.0.8
build Coreutils
tar -xvf coreutils-8.31.tar.xz
pushd coreutils-8.31
./configure --prefix=/tools --enable-install-program=hostname
make
make RUN_EXPENSIVE_TESTS=yes check
make install
popd
rm -rf coreutils-8.31
build Diffutils
tar -xvf diffutils-3.7.tar.xz
pushd diffutils-3.7
./configure --prefix=/tools
make
make check
make install
popd
rm -rf diffutils-3.7
build File
tar -xvf file-5.38.tar.gz
pushd file-5.38
./configure --prefix=/tools
make
make check
make install
popd
rm -rf file-5.38
build Findutils
tar -xvf findutils-4.7.0.tar.xz
pushd findutils-4.7.0
./configure --prefix=/tools
make
make check
make install
popd
rm -rf findutils-4.7.0
build gawk
tar -xvf gawk-5.0.1.tar.xz
pushd gawk-5.0.1
./configure --prefix=/tools
make
make check
make install
popd
rm -rf gawk-5.0.1
build gettext
tar -xvf gettext-0.20.1.tar.xz
pushd gettext-0.20.1
./configure --disable-shared
make
cp -v gettext-tools/src/{msgfmt,msgmerge,xgettext} /tools/bin
popd
rm -rf gettext-0.20.1
build grep
tar -xvf grep-3.4.tar.xz
pushd grep-3.4
./configure --prefix=/tools
make
make check
make install
popd
rm -rf grep-3.4
build gzip
tar -xvf gzip-1.10.tar.xz
pushd gzip-1.10
./configure --prefix=/tools
make
make check
make install
popd
rm -rf gzip-1.10
build make
tar -xvf make-4.3.tar.gz
pushd make-4.3
./configure --prefix=/tools --without-guile
make
make check
make install
popd
rm -rf make-4.3
build patch
tar -xvf patch-2.7.6.tar.xz
pushd patch-2.7.6
./configure --prefix=/tools
make
make check
make install
popd
rm -rf patch-2.7.6
build perl
tar -xvf perl-5.30.1.tar.xz
pushd perl-5.30.1
sh Configure -des -Dprefix=/tools -Dlibs=-lm -Uloclibpth -Ulocincpth
make
cp -v perl cpan/podlators/scripts/pod2man /tools/bin
mkdir -pv /tools/lib/perl5/5.30.1
cp -Rv lib/* /tools/lib/perl5/5.30.1
popd
rm -rf perl-5.30.1
build python
tar -xvf Python-3.8.1.tar.xz
pushd Python-3.8.1
sed -i '/def add_multiarch_paths/a \ return' setup.py
./configure --prefix=/tools --without-ensurepip
make
make install
popd
rm -rf Python-3.8.1
build sed
tar -xvf sed-4.8.tar.xz
pushd sed-4.8
./configure --prefix=/tools
make
make check
make install
popd
rm -rf sed-4.8
build tar
tar -xvf tar-1.32.tar.xz
pushd tar-1.32
./configure --prefix=/tools
make
make check
make install
popd
rm -rf tar-1.32
build Texinfo
tar -xvf texinfo-6.7.tar.xz
pushd texinfo-6.7
./configure --prefix=/tools
make
make check
make install
popd
rm -rf texinfo-6.7
build Xz
tar -xvf xz-5.2.4.tar.xz
pushd xz-5.2.4
./configure --prefix=/tools
make
make check
make install
popd
rm -rf xz-5.2.4
Wrap up
Free 3gb space,
strip --strip-debug /tools/lib/*
/usr/bin/strip --strip-unneeded /tools/{,s}bin/*
rm -rf /tools/{,share}/{info,man,doc}
find /tools/{lib,libexec} -name \*.la -delete
# go back to original login
exit
Back up the toolchain,
sh bkuptc.sh
It will create tools.tar.gz in the backup folder in /mnt/c/Users/<user>/Documents/linux/LFS.
Change ownership to root,
sudo chown -R root:root $LFS/tools
This concludes chapter 5. Go to Part 3 for next chapters.
LFS (Part 3)
Chapter 6, Build root filesystem
Cross Compiling
cross compile
compiling for windows in linux
some ideas.....
build LFS using the scheme in https://github.com/LeeKyuHyuk/PiCLFS but change the compiler using scheme in https://github.com/Pro/raspi-toolchain
build missing libraries
Custom O/S
Concepts
Using Raspberry Pi
Pi Documentation
- https://www.raspberrypi.org/documentation/.
- https://www.raspberrypi.org/documentation/configuration/.
- https://pifi.imti.co/.
- https://youtu.be/qeHpXVUwI08
- https://youtu.be/RlgLIr2gZFg
IOT
If we want to make an IOT with Pi, we will need to setup a headless pi first. We will use raspberry pi zero W since it has built-in wireless which can be used to network for development as well as connecting the device to the internet without additional hardware.
Setup for development
We will use a PC to do code editing and run code to test during development. We will setup the wifi to connect the pi to a network that the PC is connected to.
Setup for headless wifi and USB networking
First burn the minimal boot image to the SD card using the PC. After the image is prepared, take out and reinsert the SD card in the PC to make the new filesystem visible. Now go to the root directory of the SD.
First we will setup the wifi networking. Create the following two files in the disk image root directory: wpa_supplicant.conf and ssh.
The wpa_supplicant.conf file should contain the following content,
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="Wifi-Network-Name"
    psk="Wifi-Network-Password"
    key_mgmt=WPA-PSK
}
The ssh file should be empty. This will enable incoming ssh connections into pi.
These two files will setup the config during boot and then will be deleted after boot, but we will not boot it yet.
Next we will setup USB networking.
Use a text editor to edit config.txt in the same directory.
Go to the bottom and add,
dtoverlay=dwc2
Save the config.txt.
Now edit the cmdline.txt file. After rootwait (the last word on the first line), add a space and then modules-load=dwc2,g_ether. Note that this is a very long line.
... rootwait modules-load=dwc2,g_ether ...
Save the cmdline.txt
Insert the SD in pi. Now we can use USB networking to ssh into pi for development. First make sure that no power cable is connected to the Pi. Simply plug the Pi USB OTG port into a PC or laptop with a cable. The PC will recognize the Pi device and power it thru the cable. After the boot completes, you can ssh into pi@raspberrypi.local using the default password raspberry.
You can also ssh thru wifi. Detach the cable from the computer. Plug in the power cable to the power port and turn power on. After the boot completes, we can connect to the headless pi thru ssh from a computer on the wifi network.
You should change the default password on this first boot to something else.
Secure the ssh
Now that you are connected to pi via ssh, it is best to set up key-based authentication instead of using a password for ssh at this point to make it more secure.
Key pairs are two cryptographically secure keys that are extremely difficult to break. One is private, and one is public. These keys are stored by default in the .ssh folder in your home directory on the PC. The private key will be called id_rsa and the associated public key will be called id_rsa.pub. If you don't have those files already, simply use the ssh-keygen command to generate them.
We will need to copy the public key to Raspberry Pi. Run the following commands on Pi over ssh,
mkdir -p ~/.ssh
echo >> ~/.ssh/authorized_keys
Next, we will put the key in ~/.ssh/authorized_keys using nano,
nano ~/.ssh/authorized_keys
id_rsa.pub is just a text file; open it on your PC, copy the entire content, paste it in nano at the end of ~/.ssh/authorized_keys and save.
Now log in again using ssh from another terminal. If it didn't ask for a password, then we have successfully set up the keys.
We can safely disable password logins now, so that all authentication is done only by the key pairs without locking us out. On pi we will change /etc/ssh/sshd_config,
sudo nano /etc/ssh/sshd_config
There are three lines that need to be changed to no, if they are not set that way already,
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
Save the file and either restart the ssh system with sudo service ssh reload or reboot. Now you should be able to ssh into pi@raspberrypi.local from the authorized PC only, and you will not need to enter any password.
Create a Samba share
We will use a code editor on the PC to edit files directly on the pi. We will install Samba to do this. Samba is available in Raspbian's standard software repositories. We're going to update our repository index, make sure our operating system is fully updated, and install Samba using apt-get. In the ssh terminal, type:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install samba samba-common-bin
We’re going to create a dedicated shared directory on our Pi’s SD.
sudo mkdir -m 1777 /home/pi/devcode
This command sets the sticky bit (1) to help prevent the directory from being accidentally deleted and gives read/write/execute (777) permissions on it.
Edit Samba’s config files to make the file share visible to the Windows PCs on the network.
sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.orig
sudo nano /etc/samba/smb.conf
In our example, you’ll need to add the following entry:
[devcode]
Comment = Pi shared folder
path = /home/pi/devcode
writeable = yes
create mask = 0750
directory mask = 0750
valid users = pi
public = no
Make sure that the path points to a folder that has read/write permission for all the valid users.
If you want to force a user or group when you write a file in your samba share you can use following,
[devcode]
...
force user = user1
force group = group1
It is also best to comment out anything that you don't need, such as the printers or home sections.
Before we start the server, you’ll want to set a Samba password - this is not the same as your standard default password (raspberry), but there’s no harm in reusing this if you want to, as this is a low-security, local network project.
sudo smbpasswd -a pi
Then set a password as prompted.
Finally, let’s restart Samba:
sudo /etc/init.d/smbd restart
From now on, Samba will start automatically whenever you power on your Pi. From your windows PC file explorer you can connect to \\raspberrypi and map devcode to a drive letter. You can do the rest of your development using the popular vscode or any other editor from your PC on the newly created drive.
vscode for development
It is quite easy to setup. Read Visual Studio Code Remote Development over SSH to a Raspberry Pi is butter. Unfortunately, pi zero does not work because microsoft's vscode remote server is not compiled for armv6; it only supports armv7. I am not sure if the source code is available for one to re-compile for armv6.
This kind of capability has been done for atom, called atom-remote, which uses rmate (remote for another editor called textmate). They do the editing over ssh. There is also an rmate extension for vscode, https://github.com/rafaelmaiolla/remote-vscode. More reading to do for sure.
Configure IOT setup mechanism by user
If we build an IOT device, it needs to be configured. For example, the user needs to set up the wifi connection information so that it can be connected to the internet. The question is, how do we set it up with a PC or cell phone and input those settings?
The basic strategy is to set up a web page that collects the configuration data. We will need to set up a web server first to produce the interface.
Once done, we can scan wifi networks from pi to get all the available access points. For instance, we can use the following shell command to scan and return the result.
sudo iw wlan0 scan
We can use the returned info in the configuration webpage for the user to select a wifi connection and provide the password.
Installing the web server
We will be using instructions from Installing Lighttpd with Python CGI support.
Install the lighttpd web server,
sudo apt-get install lighttpd
Create a directory for the content
mkdir -p /home/pi/devcode/httpd/cgi-bin
cp -Rv /var/www/* /home/pi/devcode/httpd
sudo chown -R www-data /home/pi/devcode/httpd
sudo chgrp -R www-data /home/pi/devcode/httpd
find /home/pi/devcode/httpd -type d -exec sudo chmod g+ws {} \;
sudo adduser pi www-data
We will edit /etc/lighttpd/lighttpd.conf with nano to update the server configuration.
sudo nano /etc/lighttpd/lighttpd.conf
We will change the document root in /etc/lighttpd/lighttpd.conf,
server.document-root = "/home/pi/devcode/httpd/public"
We will append the following to the end of /etc/lighttpd/lighttpd.conf to enable cgi,
server.modules += ( "mod_cgi", "mod_setenv" )
static-file.exclude-extensions += ( ".py", ".sh", )
$HTTP["url"] =~ "^/cgi-bin/" {
    alias.url = ( "/cgi-bin/" => "/home/pi/devcode/httpd/cgi-bin/" )
    cgi.assign = (
        ".py" => "/usr/bin/python3",
        ".pl" => "/usr/bin/perl",
        ".sh" => "/bin/sh",
    )
    setenv.set-environment = ( "PYTHONPATH" => "/home/pi/devcode/httpd/lib" )
}
server.modules += ("mod_rewrite")
url.rewrite-once = ( "^/json-api/(.*)\.json" => "/cgi-bin/$1.py" )
This example will also set up the search path for any custom python module and any url rewrite you may need.
Restart the server
sudo service lighttpd restart
Now we can put static contents in the httpd/html directory and all the handlers in the httpd/cgi-bin directory. Go ahead, test the server from a web browser with some static content and cgi.
custom fastcgi for lighttpd
- https://github.com/jerryvig/lighttpd-fastcgi-c
- https://docs.rs/fastcgi/1.0.0/fastcgi/
- https://dafyddcrosby.com/rust-dreamhost-fastcgi/
Using privileged commands in CGI
The web server cgi scripts may need to run commands with root permission. This can be allowed by updating the sudo permission for the server for specific commands. For instance, we can scan the wifi networks using the /sbin/iw command running with root permission. You can edit the permission by running,
sudo visudo
This will bring up nano with the permission file. Add the following line at the end of the file,
%www-data ALL=NOPASSWD: /sbin/iw
Now save the file. You can add more than one command in comma separated paths if needed. Check the documentation for visudo.
Idea 1: Configure via wifi
Set it up initially as a wifi access point on power up. Then use it to setup up the configuration.
Perhaps we can run both ap and client at the same time? Or use a reset switch to select the mode. Or we can use some other algorithmic way to turn on the access point. We can use a captive portal to show the user interface.
- https://www.raspberrypi.org/forums/viewtopic.php?t=211542.
- https://serverfault.com/questions/869857/systemd-how-to-selectively-disable-wpa-supplicant-for-a-specific-wlan-interface.
- https://pifi.imti.co/.
- https://en.wikipedia.org/wiki/Captive_portal.
Check an implementation. I haven't tested this yet.
Idea 2: Configure via bluetooth
Make pi a bluetooth device, connect your phone to it with an app that should display a user interface and send the info to the device to get it configured.
Is it possible that the device will send a html page while the bt connection acts as a network connection? Probably not a whole lot different from idea 1 if we do that.
I haven't tested this idea yet.
- https://hacks.mozilla.org/2017/02/headless-raspberry-pi-configuration-over-bluetooth/
- https://youtu.be/sEmjcgbmoRM
Idea 3: Configure via USB
Connect the device with a usb cable to a computer or phone; again the same concept, a user interface shows up to configure.
- HEADLESS PI ZERO SSH ACCESS OVER USB (WINDOWS).
- Raspberry pi boot overlays.
- Go Go Gadget Pi Zero.
- RASPBERRY PI ZERO USB/ETHERNET GADGET TUTORIAL.
Note that a phone has a usb otg connector, and so does pi zero. Both will be in gadget mode. To connect to a phone we will need a special cable, which is not desired but possible.
However, let's explore the idea of Pi as a device connected to a PC or laptop host. Pi has usb otg, meaning that it can be either a host or a device. We can connect them with a cable and set up Pi as an Ethernet gadget. Then the configuration webpage will be visible from the PC browser. This seems to be the most straightforward way since our Pi is already set up for USB networking.
Make sure that the power cable is removed from the Pi. Simply plug the Pi USB OTG port into a PC or laptop. The PC will power and recognize the Pi device. At this point you can open a browser and browse to http://raspberrypi.local and the web page will be displayed.
However, there is one problem in this case. My desktop works fine, but my laptop is treating pi as a com port, as this article mentioned. I am manually trying to install an ndis driver on my windows 10, but the microsoft site was no help. Apparently a certificate is needed for the inf file they suggested. Gotta research more to find where that certificate for their inf file is located.
Meanwhile this post, https://forum.moddevices.com/t/rndis-driver-for-windows-10/299/7, suggested a way to install a rndis driver from moddevice.
The full documentation is here, read carefully before you install the driver,
I downloaded the zip file, mod-duo-rndis.zip, from the microsoft.net site, installed it and it worked.
I backed up the zip file here, just in case the above link ever stops working.
Raspberry pi as Access Point and Wifi client
This is an example of how the idea 1 can be implemented. This was collected from the tutorials found on internet https://www.raspberrypi.org/forums/viewtopic.php?t=211542.
It is based on IOT wifi's solution, but I wanted to use a language other than Go to manage my wifi connections, so all changes are within the standard Raspbian Stretch OS.
These steps are (as best as I can remember) in the order that I did them in:
1. Update system
Run apt-get update and upgrade to make sure you have the latest and greatest.
sudo apt-get update
sudo apt-get upgrade
This may take a while depending on connection speed.
2. Install hostapd and dnsmasq
Install the hostapd access point daemon and the dnsmasq dhcp service.
sudo apt-get install hostapd dnsmasq
3. Edit configuration files
Here we need to edit the config files for dhcpcd, hostapd, and dnsmasq so that they all play nice together. We do NOT, as in past implementations, make any edits to the /etc/network/interfaces file. If you do, it can cause problems; check the tutorial notes here.
Edit /etc/dhcpcd.conf
interface uap0
static ip_address=192.168.50.1/24
nohook wpa_supplicant
This sets up a static IP address on the uap0 interface that we will set up in the startup script. The nohook line prevents the 10-wpa-supplicant hook from running wpa-supplicant on this interface.
Replace /etc/dnsmasq.conf
Move the dnsmasq original file to save a copy of the quite useful example, you may even want to use some of the RPi-specific lines at the end. I did not test my solution with those.
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
Create a new /etc/dnsmasq.conf
and add the following to it:
interface=lo,uap0 # Use interfaces lo and uap0
bind-interfaces # Bind to the interfaces
server=8.8.8.8 # Forward DNS requests to Google DNS
domain-needed # Don't forward short names
bogus-priv # Never forward addresses in the non-routed address spaces
# Assign IP addresses between 192.168.70.50 and 192.168.70.150
# with a 12-hour lease time
dhcp-range=192.168.70.50,192.168.70.150,12h
# The above address range is totally arbitrary; use your own.
Create file /etc/hostapd/hostapd.conf
and add the following:
(Feel free to delete the commented out lines)
# Set the channel (frequency) of the host access point
channel=1
# Set the SSID broadcast by your access point (replace with your own, of course)
ssid=IOT-Config
# This sets the passphrase for your access point (again, use your own)
wpa_passphrase=passwordBetween8and64charactersLong
# This is the name of the WiFi interface we configured above
interface=uap0
# Use the 2.4GHz band
# (untested: ag mode to get 5GHz band)
hw_mode=g
# Accept all MAC addresses
macaddr_acl=0
# Use WPA authentication
auth_algs=1
# Require clients to know the network name
ignore_broadcast_ssid=0
# Use WPA2
wpa=2
# Use a pre-shared key
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
driver=nl80211
# I commented out the lines below in my implementation, but I kept them here for reference.
# Enable WMM
#wmm_enabled=1
# Enable 40MHz channels with 20ns guard interval
#ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
Note: The channel written here MUST match the channel of the wifi that you connect to in client mode (via wpa-supplicant). If the channels for your AP and STA mode services do not match, then one or both of them will not run. This is because there is only one physical antenna. It cannot cover two channels at once.
Edit the file /etc/default/hostapd and uncomment the DAEMON_CONF line. Add the following,
DAEMON_CONF="/etc/hostapd/hostapd.conf"
4. Create startup script
Add a new file /usr/local/bin/wifistart (or whatever name you like best), and add the following to it:
#!/bin/bash
# Redundant stops to make sure services are not running
echo "Stopping network services (if running)..."
systemctl stop hostapd.service
systemctl stop dnsmasq.service
systemctl stop dhcpcd.service
# Make sure no uap0 interface exists (this generates an error; we could probably use an if statement to check if it exists first)
echo "Removing uap0 interface..."
iw dev uap0 del
# Add uap0 interface (this is dependent on the wireless interface being called wlan0, which it may not be in Stretch)
echo "Adding uap0 interface..."
iw dev wlan0 interface add uap0 type __ap
# Modify iptables (these can probably be saved using iptables-persistent if desired)
echo "IPV4 forwarding: setting..."
sysctl net.ipv4.ip_forward=1
echo "Editing IP tables..."
iptables -t nat -A POSTROUTING -s 192.168.70.0/24 ! -d 192.168.70.0/24 -j MASQUERADE
# Bring up uap0 interface. Commented out line may be a possible alternative to using dhcpcd.conf to set up the IP address.
#ifconfig uap0 192.168.70.1 netmask 255.255.255.0 broadcast 192.168.70.255
ifconfig uap0 up
# Start hostapd. 10-second sleep avoids some race condition, apparently. It may not need to be that long. (?)
echo "Starting hostapd service..."
systemctl start hostapd.service
sleep 10
# Start dhcpcd. Again, a 5-second sleep
echo "Starting dhcpcd service..."
systemctl start dhcpcd.service
sleep 5
echo "Starting dnsmasq service..."
systemctl start dnsmasq.service
echo "wifistart DONE"
There are other and better ways of automating this startup process, which I adapted from IOT wifi's code here. This demonstrates the basic functionality in a simple script.
5. Edit rc.local system script
There are other ways of doing this, including creating a daemon that can be used by systemctl, which I would recommend doing if you want something that will restart if it fails. Adafruit has a simple write-up on that here. I used rc.local
for simplicity here.
Add the following to your /etc/rc.local
script above the exit 0 line. Note the spacing between /bin/bash
and /usr/local/bin/wifistart
.
/bin/bash /usr/local/bin/wifistart
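If you prefer the systemd route mentioned above instead of rc.local, a minimal unit sketch could look like the following (the unit name and options here are my own choices, not from the original tutorial; adjust as needed):
sudo tee /etc/systemd/system/wifistart.service > /dev/null <<'EOF'
[Unit]
Description=AP+STA wifi bring-up
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/bash /usr/local/bin/wifistart
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable wifistart.service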
6. Disable regular network services
The wifistart script handles starting up network services in a certain order and time frame. Disabling them here makes sure things are not run at system startup.
sudo systemctl stop hostapd
sudo systemctl stop dnsmasq
sudo systemctl stop dhcpcd
sudo systemctl disable hostapd
sudo systemctl disable dnsmasq
sudo systemctl disable dhcpcd
7. Reboot
sudo reboot
If you want to test the code directly and view the output, just run
sudo /usr/local/bin/wifistart
from the terminal after commenting out the wifistart
script line in rc.local
.
Preparing for distribution.
Back up the SD image from the dev SD card first.
Now we have to make sure the image has ssh, samba, and any other services not needed on the deployed device disabled, for example by running some kind of shell script. The SD then contains the production image, ready for distribution.
# list installed services
ls -la /etc/rc2.d/
# disable
sudo update-rc.d ssh disable
# enable
sudo update-rc.d ssh enable
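As a sketch of such a script (the service names here are just examples; adjust the list to whatever is actually installed on the image):
#!/bin/bash
# Disable services not needed on the deployed device
for svc in ssh smbd nmbd avahi-daemon; do
    sudo systemctl disable "$svc" 2>/dev/null || sudo update-rc.d "$svc" disable
done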
Save the SD image from the dev SD card. This will be the boot image to be downloaded.
Alternatively, we can use an image build strategy to produce an optimized image with only the necessary components, which will reduce the image size.
If the image is too large, we can put minimal code on the SD, something similar to noobs
(New Out of Box Software); the boot image is then downloaded and prepared by the user after the initial boot, during configuration.
For applications where the production SD image is small, there is no benefit in using the NOOBS strategy.
- Imaging sdio source code https://github.com/raspberrypi/rpi-imager
- https://github.com/raspberrypi/noobs
todo: check how to use docker container in pi
Wifi related links
- wpa_supplicant developers doc
- Changing Wifi networks
- Setting up a wifi
- Switching between known Wifi networks
- Which WiFi network I am connected to?
- https://www.tecmint.com/set-system-locales-in-linux/
- https://www.debian.org/doc/manuals/debian-reference/ch08.en.html
Some random setup stuff
timedatectl list-timezones
provides a list of all timezones.
- https://github.com/eggert/tz all timezones
- raspi-config https://github.com/RPi-Distro/raspi-config
- list of 2 letter country codes https://www.iso.org/obp/ui/#search
- list of flags https://github.com/lipis/flag-icon-css
- language codes https://www.loc.gov/standards/iso639-2/php/code_list.php
- select from a list https://svelte.dev/tutorial/select-bindings
Samba WINS doesn't make it discoverable in windows 10
WSD is missing from samba; samba only supports netbios. This WSD server written in python will make the device discoverable.
Daemon with shell script
- Making service daemon with shell script http://manpages.ubuntu.com/manpages/focal/en/man8/start-stop-daemon.8.html
- A shell Daemon template. This seems to have recursion; we need to fix it if we want to use it.
- Daemons.
- Using start-stop-daemon
Debugging python cgi scripts
The following will send the error message and traceback to the browser for debugging,
import sys
import traceback

print("Content-Type: text/html")
print()

sys.stderr = sys.stdout
try:
    ...your code here...
except:
    print("\n\n<PRE>")
    traceback.print_exc()
Remove the code after debugging is complete. Otherwise it may expose information, leading to a security risk for your application.
Note that post requests can not be redirected; the browser turns a redirected post into a get request and then the request fails.
Using cython
install cython first.
sudo apt-get update
# for python2
sudo apt-get install python-dev --fix-missing
sudo apt-get install python-pip
sudo apt-get install python-distutils
sudo pip install cython
# for python3
sudo apt-get install python3-dev --fix-missing
sudo apt-get install python3-pip
sudo apt-get install python3-distutils
sudo pip3 install cython
cython is really designed for building modules to be used within python for speed-up; packaging a standalone executable is tedious because of the dependency chain, as all the dependencies have to be compiled manually.
Simple way. Let's say our python version is 3.7m; compile test.pyx
to an executable,
cython --embed -3 test.pyx
gcc -c test.c `python3-config --cflags`
gcc -o test test.o `python3-config --ldflags`
check https://github.com/cython/cython/tree/master/Demos/embed
Use gcc to create a binary python module that can be imported,
cython -3 test.pyx
gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong \
-Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC \
-I/usr/include/python3.7m -c test.c -o test.o
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,relro -g \
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 \
test.o -o test.so
gcc options to create the executable in one step, picking up include options from pkg-config,
cython --embed -3 test.pyx
gcc -v -Os -lpthread -lm -lutil -ldl -lpython3.7m \
`pkg-config --cflags python-3.7m` \
-o test test.c
Using distutils build system
This creates a .so
library, a binary python module,
import distutils.core
import Cython.Build
distutils.core.setup(
ext_modules = Cython.Build.cythonize("test.pyx"))
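Assuming the script above is saved as setup.py (the usual convention), it is run with,
python3 setup.py build_ext --inplace
This builds the extension module in place so it can be imported from python.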
todo: I have yet to figure out how I can generate an executable using the cython build api.
- Boosting Python Scripts With Cython
- Creating an executable file using Cython
- Making an executable in Cython
- Protecting Python Sources With Cython
- http://okigiveup.net/an-introduction-to-cython/
- https://tryexceptpass.org/article/package-python-as-executable/
setting time zone from terminal
sudo timedatectl set-timezone Europe/Brussels
setup SSL using lets encrypt
Using Rust
See Using rust in Raspberry pi.
fastcgi
todo
wpa_cli in daemon mode
setup raspberry pi with live reload
back up pi
- Backup and recovery solution I use (recovery image)
- Script to backup a Raspberry Pi disk image
- Encrypted backup of linux (Raspbian) configuration data and Dropbox upload
C library for controlling GPIO
To update or install on a Raspbian-Lite system:
sudo apt-get install wiringpi
The author has stopped developing it; the code is available on github,
an example of how to use the library
A great resource
rust GPIO for pi
- ???
- Maybe a kernel module with rust? Some work is ongoing.
- RPPAL.
- https://github.com/rust-embedded/rust-sysfs-gpio.
The most promising seems to be the RPPAL option.
I will try this option and do a write-up on it.
python GPIO
rpikernelhack
apt-get
armv6 toolchain
Booting Raspbian
- The GPU ROM firmware boots and reads the first FAT partition.
- start.elf from the FAT partition is the bootloader, which is loaded and executed by the GPU.
- The bootloader loads config.txt from the FAT partition into memory.
- In the config.txt file, the kernel setting provides the kernel and the command line setting provides the command line script (see the sketch below).
- The bootloader loads the kernel and the command line into arm memory.
- The bootloader passes control to the kernel.
- The kernel mounts the ext4 partition using the UUID from the command line setting root=PARTUUID=6c586e13-02.
- Finally, the kernel looks for a file called /init specified in the command line and executes it.
todo: got this from an article, need to verify.
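A minimal sketch of the files involved (values are illustrative; kernel= and cmdline= reflect the usual firmware defaults and are often not spelled out explicitly, and the PARTUUID matches the example above):
# config.txt (on the FAT partition)
kernel=kernel.img
cmdline=cmdline.txt

# cmdline.txt (a single line)
console=serial0,115200 console=tty1 root=PARTUUID=6c586e13-02 rootfstype=ext4 fsck.repair=yes rootwait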
go lang for pi 0
- WPA supplicant over D-Bus using go for raspberry pi.
- https://golang.org/doc/tutorial/getting-started
Use go build with the following to compile for pi,
env GOOS=linux GOARCH=arm GOARM=5 go build
I tried with GOARM=6 and it worked too.
Some other stuff
- Pair a Raspberry Pi and Android phone https://bluedot.readthedocs.io/en/latest/pairpiandroid.html
todo: Will this work for pi zero w?
Cross Compiling for Pi
cross compile
go lang for pi 0
Using rust in Raspberry pi
- Cross Compile with WSL2 for the Raspberry Pi.
- Cross Compiling Rust for the Raspberry Pi.
- Prebuilt Windows Toolchain for Raspberry Pi. Question: who are these people? Where are the sources for these tools?
- Cross compiling Rust for ARM (e.g. Raspberry Pi) using any OS!
- “Zero setup” cross compilation and “cross testing” of Rust crates
- Vagrant, Virtual machine for cross development. I really like this setup, easy to use. Plays well with virtualbox.
- https://github.com/kunerd/clerk/wiki/How-to-use-HD44780-LCD-from-Rust#setting-up-the-cross-toolchain
- https://opensource.com/article/19/3/physical-computing-rust-raspberry-pi
- https://github.com/japaric/rust-cross
- RPi-GCC-cross-compiler
QEMU for library dependencies
- Debootstrap
- Introduction to qemu-debootstrap.
- https://headmelted.com/using-qemu-to-produce-debian-filesystems-for-multiple-architectures-280df41d28eb.
- Kernel Recipes 2015 - Speed up your kernel development cycle with QEMU - Stefan Hajnoczi.
- Debootstrap #1 Creating a Filesystem for Debian install Linux tutorial.
- Creating Ubuntu and Debian container base images, the old and simple way.
- Raspberry Pi Emulator for Windows 10 Full Setup Tutorial and Speed Optimization.
- RASPBERRY PI ON QEMU.
- Run Raspberry Pi Zero W image in qemu, github source.
- How to set up QEMU 3.0 on Ubuntu 18.04.
building qemu for raspberry pi zero in wsl2 ubuntu
It is best to make a directory somewhere in windows for the sources; using it from wsl2 keeps the wsl2 VHD files small. Using powershell,
cd c:\Users\<user_name>\Documents
mkdir qemu-build
Start ubuntu wsl2 instance. Now using shell,
cd ~/<some_dir>
mkdir qemu-build
sudo mount --bind "/mnt/c/Users/<user_name>/Documents/qemu-build" qemu-build
cd qemu-build
This is where we will build qemu for raspberry pi zero.
Get qemu sources and dependencies,
git clone https://github.com/igwtech/qemu
# do followings only if you need to modify submodule sources
# git submodule init
# git submodule update --recursive
We are using a forked qemu source above because the official qemu repo doesn't provide support for raspberry pi zero. Feel free to diff the code against tags from the original source; it gives valuable insight into adding support for another arm processor.
Activate source repositories by un-commenting the deb-src
lines in /etc/apt/sources.list
.
Get qemu dependencies,
sudo apt-get update
sudo apt-get build-dep qemu
Create a build directory
mkdir build
cd build
Configure qemu to build all qemu binaries,
../qemu/configure
Otherwise, if you have already installed all the binaries before, or are only interested in qemu-arm and qemu-system-arm, this configures the build for just those,
../qemu/configure --target-list=arm-softmmu,arm-linux-user
To find all the configuration options, run configure --help
.
Build and install qemu,
make
sudo make install
Now we can remove the mount,
cd ../..
sudo umount qemu-build
You can remove the build directory qemu-build\build
if you like, or keep it for later development.
Run qemu for raspi0,
qemu-system-arm -machine raspi0 -serial stdio -dtb bcm2708-rpi-zero-w.dtb -kernel kernel.img -append 'printk.time=0 earlycon=pl011,0x20201000 console=ttyAMA0'
qemu-kvm has problems in wsl2, currently it does not work properly.
Raspbian apt sources
/etc/apt/sources.list
,
deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
#deb-src http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
/etc/apt/sources.list.d/raspi.list
,
deb http://archive.raspberrypi.org/debian/ buster main
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
#deb-src http://archive.raspberrypi.org/debian/ buster main
install the cross compiler
check https://github.com/Pro/raspi-toolchain to use their prebuilt toolchain in wsl2
# Download the toolchain:
wget https://github.com/Pro/raspi-toolchain/releases/latest/download/raspi-toolchain.tar.gz
# The toolchain has to be in /opt/cross-pi-gcc since it's not location independent.
sudo tar xfz raspi-toolchain.tar.gz --strip-components=1 -C /opt
raspbian filesystem
/etc/fstab
,
proc /proc proc defaults 0 0
PARTUUID=288695f5-01 /boot vfat defaults 0 2
PARTUUID=288695f5-02 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
pi-sd-2/etc/ld.so.preload
,
/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so
Installing library dependencies in a image
If there is a dependency on additional libraries, we should install those
on the raspberry pi SD. Then we can save an image of the SD in a .img
file using Win32DiskImager. Now we can mount the image and copy the
necessary libraries to the toolchain sysroot we installed earlier.
$ fdisk -lu /mnt/d/pi_images/pi-sd.img
Disk /mnt/d/pi_images/pi-sd.img: 28.97 GiB, 31086084096 bytes, 60715008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x288695f5
Device Boot Start End Sectors Size Id Type
/mnt/d/pi_images/pi-sd.img1 8192 532479 524288 256M c W95 FAT32 (LBA)
/mnt/d/pi_images/pi-sd.img2 532480 60715007 60182528 28.7G 83 Linux
The fat32
partition is the first one. The offset is 8192*512=4194304 and
size is 524288*512=268435456 in bytes.
The ext4
partition is the second one. The offset is 512*532480=272629760 and size is 512*60182528=30813454336 in bytes.
Now you can mount them,
mkdir /home/pi/pi-sd-1
mkdir /home/pi/pi-sd-2
mount -o loop,offset=4194304,sizelimit=268435456 /mnt/d/pi_images/pi-sd.img /home/pi/pi-sd-1
mount -o loop,offset=272629760 /mnt/d/pi_images/pi-sd.img /home/pi/pi-sd-2
ls -la /home/pi/pi-sd-1
ls -la /home/pi/pi-sd-2
There is no need to specify size for the last partition. At this point we can edit the image to get it ready for emulation.
To cross compile, copy all the libraries,
rsync -vR --progress -rl --delete-after --safe-links /home/pi/pi-sd-2/{lib,usr,etc/ld.so.conf.d,opt/vc/lib} $HOME/rpi/rootfs
(TODO)
Now you can copy appropriate libraries to
/opt/rpi_tools/arm-bcm2708/arm-linux-gnueabihf/arm-linux-gnueabihf/sysroot
.
qemu rpi kernel (TODO)
- https://github.com/dhruvvyas90/qemu-rpi-kernel this claims some adjustment on rpi kernel for qemu need to investigate what this adjustment is and is it relevant any more.
disk images
utilities
- Raspberry-Pi Utilities. A very nice place to learn how to chroot and get emulation going.
Rust in Raspberry Pi
Notes
- https://github.com/diwic/dbus-rs/blob/master/libdbus-sys/cross_compile.md
- https://github.com/diwic/dbus-rs/issues/184#issuecomment-520228758
Setup wsl2 for cross compile
First install the cross compile toolchain from https://github.com/kkibria/raspi-toolchain in wsl2. The toolchain install will create a temporary download area which contains the Raspbian image file. Save the image file (or the whole download area); we will need the image later to install additional libraries if required.
Setup wsl2 for rust
Now install and set up rust,
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. ~/.bashrc
rustc --version
rustup target add arm-unknown-linux-gnueabihf
We need to add our build target to ~/.cargo/config
by adding the following lines, so that rust knows which linker to use.
[build]
# Specifies that the default target is ARM.
target = "arm-unknown-linux-gnueabihf"
rustflags = ["-L", "/lib/arm-linux-gnueabihf"]
[target.arm-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
Now you have a working Rust cross-compilation toolchain set up.
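As a quick sanity check of the toolchain (a sketch; the project name hello is arbitrary), cross-build a fresh binary and inspect it with file:
cargo new hello && cd hello
cargo build        # target and linker come from ~/.cargo/config
file target/arm-unknown-linux-gnueabihf/debug/hello
# should report a 32-bit ARM ELF executable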
rust projects requiring native library
If you are building the native library yourself, then you know what you are doing. However, what if you are using a library that is available from the raspbian apt repository?
One such example is a project that uses dbus. The same technique can be applied to other projects that need native libraries, so we will explore the dbus case.
Pi Dbus tooling for rust
We need to configure the DBus crate for cross compilation. The DBus crate uses a build script that uses pkg_config to locate the native dbus libraries; when cross compiling we have to supply that information ourselves through two keys:
cargo:rustc-link-search
, which is the library search path.
cargo:rustc-link-lib
, which is the name of a library to link.
For this, we need to add the Dbus libraries to ~/.cargo/config
.
[target.arm-unknown-linux-gnueabihf.dbus]
# Specifies the library search paths.
rustc-link-search = [
# we will have to add the path
# when we know where the libraries are installed.
]
# Specifies the names of the native libraries that are required to build DBus.
rustc-link-lib = [
"dbus-1",
"gcrypt",
"gpg-error",
"lz4",
"lzma",
"pcre",
"selinux",
"systemd",
]
Use libget to get the required libraries
When you installed the toolchain in wsl2, it also installed libget
. This automates everything we need to do and installs the libraries in ~/rootfs
.
We can add the libraries to our root file system from ~/rootfs using rsync
pushd ~/rootfs
rsync -vR --progress -rl --delete-after --safe-links {etc,lib,sbin,usr,var} $HOME/rpi/rootfs
popd
Use liblink to make links to library for rust
When you installed the toolchain in wsl2, it also installed liblink
. This automates everything we need to do and installs the links in the ~/liblink
folder.
liblink dbus-1 gcrypt gpg-error lz4 pcre lzma pthread dl selinux systemd
Add the library search path for dbus libraries
The ~/liblink folder we created is not inside the rust project folder. Therefore, we can simply make rust use this folder for linking. In that case, we specify the absolute path of ~/liblink in the rustc-link-search section of ~/.cargo/config.
~/.cargo/config:
[target.arm-unknown-linux-gnueabihf.dbus]
# Specifies the library search paths.
rustc-link-search = [
# absolute path of ~/liblink
"/home/username/liblink",
]
Native library for Rust in Raspberry Pi
We will look into the problems we face in getting native libraries when cross compiling rust for pi. This will clarify how the wsl cross compile tool helps us deal with them. As mentioned before, a dbus based project is a good case study to shed light on rust cross compiling issues related to using a native library on raspberry pi.
A simple dbus project for testing
To see how to set up the DBus crate for cross compilation, we will create a new project for it. Create an empty directory somewhere run the following,
Bash
# Initialize a new Rust project in the current folder.
cargo init
this produces the following layout,
.
├── Cargo.toml
├── build.rs
└── src
└── main.rs
The crate's build script is specified in Cargo.toml and is normally executed at every build.
Change the contents of main.rs to the following. This is just so that we actually use something from the DBus crate, otherwise there would be no reference to it in the final executable and nothing for the linker to do.
main.rs
use dbus::{BusType, Connection, Interface, Member};
use std::error::Error;
fn main() -> Result<(), Box<dyn Error>> {
let conn = Connection::get_private(BusType::System)?;
let obj = conn.with_path("org.freedesktop.DBus", "/", 5000);
let msg = obj.method_call_with_args(
&Interface::from(&"org.freedesktop.DBus".to_owned()),
&Member::from(&"ListNames".to_owned()),
|_| (),
)?;
let names: Vec<String> = msg.get1().unwrap_or(Vec::new());
for name in names {
println!("{}", name);
}
Ok(())
}
Next, add the DBus crate as a dependency by editing Cargo.toml.
Cargo.toml
[dependencies]
dbus = "0.6"
If you try building the project at this point, you will get an error message indicating a linking failure. We have to find and download the native arm libraries required by DBus.
Problem statement 1: Details on getting missing libraries
How do you even find out which of the native packages are required? If you take a look at this line in the DBus build script, you see that it is looking for "dbus-1", which means libdbus-1.
OK, now which version of libdbus-1 is required? If you have your target system at hand, you can connect to it and run apt show libdbus-1* on it, which should show something like this.
libdbus-1 Information
Package: libdbus-1-3
Version: 1.12.16-1
...
Depends: libc6 (>= 2.28), libsystemd0
...
If you do not have the target system at hand, there is still a way: If you are using the Raspbian release based on Debian buster, head to this link (this is a huge file!) and search for Package: libdbus-1 inside there. You should see the same information.
Now we know that we have to download libdbus1-3 version 1.12.16-1 and it depends on libc6 (which is provided by the cross compilation toolchain) and libsystemd0 (which is not and which we also have to download).
In total, you have to download the following packages (the .deb files). This list contains the versions for the Raspbian release based on Debian buster. They may have changed since, check the versions installed on your target system. Click each of the package names below and download the correct file.
libdbus-1-3 is at version 1.12.16-1
libgcrypt20 is at version 1.8.4-5
libgpg-error0 is at version 1.35-1
liblz4-1 is at version 1.8.3-1
liblzma5 is at version 5.2.4-1
libpcre3 is at version 2:8.39-12
libselinux1 is at version 2.8-1+b1
libsystemd0 is at version 241-5+rpi1
Next, you have to extract each of these downloaded .deb files separately into an empty folder.
Bash
dpkg-deb -x /path/to/package.deb /path/to/empty/folder
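If several .deb files need to be unpacked, a small loop (paths are placeholders) does the same thing for all of them:
mkdir -p ./extracted
for deb in /path/to/debs/*.deb; do
    dpkg-deb -x "$deb" ./extracted
done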
Solution: See Use libget to get the required libraries.
Problem statement 2: Links to library for rust
Enter the folder you have extracted the package into and take a look at the files. The folder structure can be ./lib/arm-linux-gnueabihf or even ./usr/lib/arm-linux-gnueabihf inside this folder. The relevant files are the .so files. Some libraries however have another number after the .so, for example library.so.3. In this case, you have to add a symlink to library.so because that's where the GCC linker will look for it. The symlink must be in the same directory as the file it points to. To create a symlink called library.so that points to library.so.3, you would use the following command.
ln -s library.so.3 library.so
in our case, for dbus, we will create links
pushd /lib/arm-linux-gnueabihf
ln -sf libdbus-1.so.3 libdbus-1.so
ln -sf libgcrypt.so.20 libgcrypt.so
ln -sf libgpg-error.so.0 libgpg-error.so
ln -sf liblz4.so.1 liblz4.so
ln -sf libpcre.so.3 libpcre.so
ln -sf liblzma.so.5 liblzma.so
ln -sf libpthread.so.0 libpthread.so
ln -sf libdl.so.2 libdl.so
ln -sf libselinux.so.1 libselinux.so
ln -sf libsystemd.so.0 libsystemd.so
popd
Then take all the contents of the folder you extracted the package into and move them into another folder called libraries, which you create at the root of your Rust project. This is the location we directed the GCC linker to look for the libraries.
Repeat the extraction, symlinking and moving for all the other libraries.
Finally, after all this is done, your libraries folder should look something like this (the version numbers may differ):
./lib/arm-linux-gnueabihf/libdbus-1.so
./lib/arm-linux-gnueabihf/libdbus-1.so.3
./lib/arm-linux-gnueabihf/libdbus-1.so.3.14.15
./lib/arm-linux-gnueabihf/libgcrypt.so
...
./usr/lib/arm-linux-gnueabihf/liblz4.so
./usr/lib/arm-linux-gnueabihf/liblz4.so.1
...
Solution: See Use liblink to make links to library for rust.
Problem statement 3: Specifying library search path
We have to provide the cargo:rustc-link-search
key to
make sure all the needed libraries are found.
There are two ways we can provide any key to Cargo:
- In a Cargo config file.
- In our own build script.
We need to know how Cargo handles relative paths in rustc-link-search: They are resolved relative to the location of the extracted crate, not relative to a project.
So the takeaway is that we would have to specify absolute library search paths if we don't want a crate-relative path. How do we provide a project-relative search path if that's what we need? In that case, we can use a build script to convert project-relative paths into absolute library search paths.
The following is an example where we have the native libraries
inside a "libraries" folder within the project,
build.rs
use std::env::var;
fn main() {
// The manifest dir points to the root of the project containing this file.
let manifest_dir = var("CARGO_MANIFEST_DIR").unwrap();
// We tell Cargo that our native libraries are inside a "libraries" folder.
println!("cargo:rustc-link-search={}/libraries/lib/arm-linux-gnueabihf", manifest_dir);
println!("cargo:rustc-link-search={}/libraries/usr/lib/arm-linux-gnueabihf", manifest_dir);
}
Anyway, in this case we want to access the native libraries needed by the dbus crate from a fixed location, to keep things simple and reusable.
This way we can simply use an absolute path in ~/.cargo/config
without depending on the build script to provide the search path.
Solution: See Add the library search path for dbus libraries.
Cross compile
Finally, you will be able to cross compile the test project without error messages.
Bash
cargo build
# Should print something like:
# Finished dev [unoptimized + debuginfo] target(s) in 0.57s
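To confirm the produced executable is really an ARM binary, run file on it (the binary name matches the project directory created by cargo init):
file target/arm-unknown-linux-gnueabihf/debug/<project-name>
# should report something like: ELF 32-bit LSB executable, ARM, EABI5 ...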
Using dbus in pi
Note on getting dbus interfaces
A lot of the following has since been updated; check https://github.com/kkibria/dbus-dev
The xml files were generated on the raspberry pi,
dbus-send --system --dest=fi.w1.wpa_supplicant1 \
--type=method_call --print-reply=literal /fi/w1/wpa_supplicant1 \
org.freedesktop.DBus.Introspectable.Introspect > wpa.xml
dbus-send --system --dest=org.freedesktop.timedate1 \
--type=method_call --print-reply=literal /org/freedesktop/timedate1 \
org.freedesktop.DBus.Introspectable.Introspect > timedate.xml
and copied into the project.
Then on wsl we can use the xml files,
dbus-codegen-rust -s -f org.freedesktop.timedate1 < timedate.xml > src/timedate.rs
dbus-codegen-rust -s -f fi.w1.wpa_supplicant1 < wpa.xml > src/wpa.rs
Alternatively, we can run dbus-codegen-rust on the pi to generate the rust files directly and copy them to the rust project.
Putting it all together in get-pi-rs.sh,
#assuming RPI is already exported in .bashrc
ssh $RPI 'bash -s' <<-"EOF"
export PATH=$HOME/.cargo/bin:$PATH
rm -rf temp-wsl
mkdir temp-wsl
pushd temp-wsl
dbus-codegen-rust -s -d org.freedesktop.timedate1 -p "/org/freedesktop/timedate1" -f org.freedesktop.timedate1 > timedate.rs
dbus-codegen-rust -s -d fi.w1.wpa_supplicant1 -p "/fi/w1/wpa_supplicant1" -f fi.w1.wpa_supplicant1 > wpa.rs
popd
EOF
rcp $RPI:temp-wsl/*.rs src
Python example for dbus
Setting up dbus for interacting with WPA_SUPPLICANT
I was trying to write some code that uses the dbus api to access wpa_supplicant. My understanding from reading various posts is that wpa_supplicant must be started with the -u
flag to fully expose its apis to dbus. So I edited /lib/dhcpcd/dhcpcd-hooks/10-wpa_supplicant
by adding the -u
flag to the invocation of the wpa_supplicant
daemon in wpa_supplicant_start()
.
At this point I couldn't use wpa_cli to connect to wlan0
anymore. I checked the processes with ps
and got,
pi@raspi:~ $ ps -aux | grep wpa_sup
root 306 0.0 1.0 10724 4732 ? Ss 21:21 0:00 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
So, I edited /lib/dhcpcd/dhcpcd-hooks/10-wpa_supplicant
again to remove -u
flag, rebooted etc. and again checked the processes. This time I got,
pi@raspi:~ $ ps -aux | grep wpa_sup
root 260 0.3 1.0 10724 4640 ? Ss 21:25 0:00 /sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
root 350 0.1 0.9 10988 4052 ? Ss 21:25 0:00 wpa_supplicant -B -c/etc/wpa_supplicant/wpa_supplicant.conf -iwlan0 -Dnl80211,wext
and now I can use wpa_cli to connect to wlan0
.
This is confusing to me. I am not sure why.
After some digging, it appears that wpa_supplicant.service
should be disabled as it was preventing wpa_cli
from connecting to wlan0
.
After doing,
sudo systemctl disable wpa_supplicant
sudo reboot
I was able to connect.
I am still not sure why.
This has an explanation: https://github.com/mark2b/wpa-connect
cross compile dbus
- https://github.com/diwic/dbus-rs/blob/master/libdbus-sys/cross_compile.md
- https://serverfault.com/questions/892465/starting-systemd-services-sharing-a-session-d-bus-on-headless-system headless dbus.
- https://raspberrypi.stackexchange.com/questions/114739/how-to-install-pi-libraries-to-cross-compile-for-pi-zero-in-wsl2.
- https://airtower.wordpress.com/2010/07/20/using-gvariant-tuples/
- https://fosdem.org/2020/schedule/event/rust_dbus_library/
Working with dbus
How do I get properties using dbus
I have listed the properties that I am interested in using timedatectl
, which uses the systemd
dbus,
$ timedatectl
Local time: Tue 2020-07-28 19:37:00 PDT
Universal time: Wed 2020-07-29 02:37:00 UTC
RTC time: n/a
Time zone: America/Los_Angeles (PDT, -0700)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
Next, I checked timedatectl.c
in the systemd
source code to get the bus endpoint and method, using which I went ahead and introspected,
$ dbus-send --system --dest=org.freedesktop.timedate1 --type=method_call --print-reply /org/freedesktop/timedate1 org.freedesktop.DBus.Introspectable.Introspect
method return time=1595997538.869702 sender=:1.30 -> destination=:1.29 serial=3 reply_serial=2
string "<!DOCTYPE node PUBLIC "-//freedesktop//DTD D-BUS Object Introspection 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/introspect.dtd">
<node>
...
<interface name="org.freedesktop.DBus.Properties">
...
<method name="GetAll">
<arg name="interface" direction="in" type="s"/>
<arg name="properties" direction="out" type="a{sv}"/>
</method>
...
</interface>
<interface name="org.freedesktop.timedate1">
<property name="Timezone" type="s" access="read">
</property>
<property name="LocalRTC" type="b" access="read">
</property>
...
</interface>
</node>
"
Next I tried to use the method GetAll
,
$ dbus-send --system --dest=org.freedesktop.timedate1 --type=method_call --print-reply /org/freedesktop/timedate1 org.freedesktop.DBus.Properties.GetAll string:org.freedesktop.timedate1
method return time=1595997688.111555 sender=:1.33 -> destination=:1.32 serial=4 reply_serial=2
array [
dict entry(
string "Timezone"
variant string "America/Los_Angeles"
)
dict entry(
string "LocalRTC"
variant boolean false
)
dict entry(
string "CanNTP"
variant boolean true
)
dict entry(
string "NTP"
variant boolean true
)
dict entry(
string "NTPSynchronized"
variant boolean true
)
dict entry(
string "TimeUSec"
variant uint64 1595997688110070
)
dict entry(
string "RTCTimeUSec"
variant uint64 0
)
]
and we get our desired result same as timedatectl
.
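To read a single property instead of the whole set, the Get method on the same org.freedesktop.DBus.Properties interface works the same way, e.g. for Timezone:
dbus-send --system --dest=org.freedesktop.timedate1 --type=method_call --print-reply \
  /org/freedesktop/timedate1 org.freedesktop.DBus.Properties.Get \
  string:org.freedesktop.timedate1 string:Timezone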
Building Embedded System
There are two build systems that can be used to build images for embedded systems using linux.
Tool to help the build
Raspberry pi
The official raspbian o/s is built with Buildroot, but there are also Yocto based builds available,
- Building 32-bit Raspberry Pi Systems with Yocto.
- Building 64-bit Systems for Raspberry Pi 4 with Yocto
- Building embedded GNU/Linux distribution for Raspberry Pi using the Yocto Project.
If you want to switch the init system, you can check the existing init system using sudo stat /sbin/init
.
Cross compile raspbian
Yocto with wsl2
- WSL2 file permission issues cause Buildroot and Yocto build failures.
- WSL2 ram usage problem workaround.
- WSL2 ram usage.
- WSL2 ram usage.
- WSL2 global options.
- Yocto mega manual
wsl2 for cross compile
The default ext4 mount for wsl2 ubuntu is C:\Users\<username>\AppData\Local\Packages\Canonical*\LocalState\ext4.vhdx
.
Often we need multiple instances, or want to work on a separate ext4 mount on a different drive for disk space or other reasons.
First check the distro name.
C:\> wsl -l -v
NAME STATE VERSION
* Ubuntu-20.04 Running 2
The distro name is Ubuntu-20.04
.
Now we need to set the default user via editing /etc/wsl.conf
.
[user]
default=<username>
Now we will create another image which is a copy of the Ubuntu-20.04
.
PS C:\> wsl --shutdown
PS C:\> wsl -l -v
NAME STATE VERSION
* Ubuntu-20.04 Stopped 2
PS C:\> cd d:\
PS D:\> mkdir my_wsl
PS D:\> cd my_wsl
PS D:\my_wsl> wsl --export Ubuntu-20.04 my_distro.tar
PS D:\my_wsl> wsl --import my_distro my_distro my_distro.tar
PS D:\my_wsl> wsl -l -v
NAME STATE VERSION
* Ubuntu-20.04 Stopped 2
my_distro Stopped 2
Now we have the distro on the D drive.
We can create a VHDX file using the windows 10 Computer Management tool and then detach it. We still have to figure out a way to initialize the file as ext4 and mount it in a wsl2 linux distro.
Manually download wsl2 distro
Use the download page to get the distro. It downloads a .appx file, which can be opened with 7zip to extract
install.tar.gz
.
Now we can use the wsl command to install it,
wsl --import my_distro my_distro install.tar.gz
yocto devtool
- Yocto Project® devtool Overviewand Hands-On, slides.
- Using Devtool to Streamline Your Yocto Project Workflow - Tim Orling, Intel.
- Yocto Project Extensible SDK: Simplifying the Workflow for Application Developers.
- Working with the Linux Kernel in the Yocto Project.
yocto tutorials
Ubuntu
Mount a USB stick
Insert the USB stick. First get the device path of the usb stick by running lsblk
.
It will look something like the following, assuming it has only one partition.
sdb 8:16 1 14.9G 0 disk
└─sdb1 8:17 1 1.6G 0 part
Get the filesystem type of the partition by running blkid
,
sudo blkid /dev/sdb1
/dev/sdb1: UUID="...." TYPE="fat"
Now assuming that path /media
exists, mount the partition.
sudo mount -t <type> /dev/sdb1 /media
This mounts the usb drive at the path /media
.
Note: for FAT filesystems the type to use is
vfat
.
When done with the USB stick, unmount,
sudo umount /dev/sdb1
and remove the stick.
Debugging kernel or system program crash
- Beginning Kernel Crash Debugging on Ubuntu 18.10.
- Kernel panic - not syncing: Attempted to kill init!.
- Regarding Annoying crash report- How To Fix System Program Problem Detected In Ubuntu.
File system check
The simplest way to force an fsck
filesystem check on a root partition,
e.g. /dev/sda1
, is to create an empty file called forcefsck in the
partition's root directory.
sudo touch /forcefsck
This empty file will temporarily override any other settings and force
fsck
to check the filesystem on the next system reboot. Once the
filesystem is checked the forcefsck
file will be removed thus next time
you reboot your filesystem will NOT be checked again. Once boot completes,
the result of fsck
will be available in /var/log/boot.log
. Also
the ram filesystem used during boot will log it in
/run/initramfs/fsck.log
. This file will be lost as soon as the system
is shut down since the ram filesystem is volatile.
Security
Setting up ubuntu server with lubuntu desktop in a VirtualBox VM
Set up the VM with,
- 4G memory.
- 32G vdi disk.
- Network: NAT / Host only
- Clipboard: bidirectional.
Setup linux
Install ubuntu server from server.iso using a USB drive.
Now setup the desktop,
sudo apt-get update
# Install lubuntu desktop
sudo apt-get install lubuntu-desktop
# get guest addition
sudo apt-get install virtualbox-guest-x11
Now go to Start > Preferences > Monitor settings
and select a resolution of your choice.
Custom Resolution
First we need to find out what display outputs are available.
$ xrandr -q
Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200
Virtual1 connected 1600x1200+0+0 0mm x 0mm
1600x1200 0.0*
1280x1024 0.0
640x480 0.0
...
This means Virtual1
is the first output device; there might be more listed. Find which output you want the monitor to connect to.
Let's say we want a monitor resolution of 960 x 600 @ 60Hz.
# get a Modeline
gtf 960 600 60
Let's say the output looks like:
# 960x600 @ 60.00 Hz (GTF) hsync: 37.32 kHz; pclk: 45.98 MHz
Modeline "960x600_60.00" 45.98 960 1000 1096 1232 600 601 604 622 -HSync +Vsync
The string 960x600_60.00
is just a proposed identifier. In the following you can substitute anything more meaningful.
Now we will use this Modeline content to set our configuration,
# define a mode
xrandr --newmode "960x600_60.00" 45.98 960 1000 1096 1232 600 601 604 622 -HSync +Vsync
# map this mode to a output
xrandr --addmode Virtual1 "960x600_60.00"
At this point you can switch to the new resolution by
going to Start > Preferences > Monitor settings
and selecting the newly added resolution.
Alternatively you can switch mode for the output from the terminal,
xrandr --output Virtual1 --mode "960x600_60.00"
The whole thing can be turned into a bash script,
#!/bin/bash
# get the modeline for the following resolution
RESOLUTION="960 600 60"
# extract modeline settings
SETTINGS=$( gtf $RESOLUTION | grep Modeline | cut -d ' ' -f4-16 )
# define the mode
xrandr --newmode $SETTINGS
# get name of mode from settings
MODE=$( echo $SETTINGS | cut -d ' ' -f1 )
# get the first connected output device
DEVICE=$( xrandr -q | grep "connected" | head -1 | cut -d ' ' -f1 )
# map this mode to the device
xrandr --addmode $DEVICE $MODE
# switch to the new mode
xrandr --output $DEVICE --mode $MODE
Changing the cursor size
To change the size of your mouse cursor,
open the desktop configuration file ~/.config/lxsession/lubuntu/desktop.conf
,
find the key iGtk/CursorThemeSize
and update the value to the desired size.
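For example, the relevant part of the file would look like this (the [GTK] section name is assumed from a stock lubuntu desktop.conf; 48 is just an example size in pixels):
[GTK]
iGtk/CursorThemeSize=48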
Converting VirtualBox VDI (or VMDK) to a ISO
- Inspired by the article, Converting a virtual disk image: VDI or VMDK to an ISO you can distribute.
- TKLPatch - a simple appliance customization mechanism. Source in github.
- All about VDIs
create raw image
VBoxManage clonemedium turnkey-core.vdi turnkey-core.raw --format RAW
Next, mount the raw disk as a loopback device.
mkdir turnkey-core.mount
mount -o loop turnkey-core.raw turnkey-core.mount
GOTCHA 1: If your VM has partitions, it's a little trickier. You'll need to set up the loop device, partition mappings and finally mount the rootfs partition. You will need kpartx to set up the mappings.
loopdev=$(losetup -s -f turnkey-core.raw)
apt-get install kpartx
kpartx -a $loopdev
# p1 refers to the first partition (rootfs)
mkdir turnkey-core.mount
mount /dev/mapper/$(basename $loopdev)p1 turnkey-core.mount
Extract root filesystem and tweak for ISO configuration
Now, make a copy of the root filesystem and unmount the loopback.
mkdir turnkey-core.rootfs
rsync -a -t -r -S -I turnkey-core.mount/ turnkey-core.rootfs
umount -d turnkey-core.mount
# If your VM had partitions (GOTCHA 1):
kpartx -d $loopdev
losetup -d $loopdev
Because the VM is an installed system as opposed to the ISO, the file system table needs to be updated.
cat>turnkey-core.rootfs/etc/fstab<<EOF
aufs / aufs rw 0 0
tmpfs /tmp tmpfs nosuid,nodev 0 0
EOF
GOTCHA 2: If your VM uses a kernel optimized for virtualization (like the one included in the TurnKey VM builds), you need to replace it with a generic kernel, and also remove vmware-tools if installed. You can remove any other unneeded packages.
tklpatch-chroot turnkey-core.rootfs
# inside the chroot
apt-get update
apt-get install linux-image-generic
dpkg --purge $(dpkg-query --showformat='${Package}\n' -W 'vmware-tools*')
dpkg --purge $(dpkg-query --showformat='${Package}\n' -W '*-virtual')
exit
Generate the ISO
Finally, prepare the cdroot and generate the ISO.
tklpatch-prepare-cdroot turnkey-core.rootfs/
tklpatch-geniso turnkey-core.cdroot/
This will create my_system.iso
That's it!
burn it to usb
You can use dd.
A usb partition looks like /dev/sd<?><?>, where <?><?> is a letter followed by a number.
Look up the usb disk first by running lsblk. It will look something like,
sdb 8:16 1 14.9G 0 disk
├─sdb1 8:17 1 1.6G 0 part /media/username/usb volume name
└─sdb2 8:18 1 2.4M 0 part
Now you can unmount the usb as following,
sudo umount /dev/sdb1
Then, next (this is a destructive command and wipes the entire USB drive with the contents of the iso, so be careful):
sudo dd bs=4M if=path/to/my_system.iso of=/dev/sdb conv=fdatasync status=progress
Where my_system.iso
is the input file, and /dev/sdb
is the USB device you're writing to (the whole device, not a partition such as /dev/sdb1).
Reset password
SSH keys
Artificial Intelligence
- GANs from Scratch 1: A deep introduction. With code in PyTorch and TensorFlow
- Interactive Video Stylization Using Few-Shot Patch-Based Training
- Use Pytorch Lightning with Weights & Biases
- Image-to-Image Translation with Conditional Adversarial Nets
courses
stable diffusion
- Original SD paper -- High-Resolution Image Synthesis with Latent Diffusion Models
- ControlNet paper -- Adding Conditional Control to Text-to-Image Diffusion Models
- SDXL paper -- SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis
datasets
- https://github.com/mlabonne/llm-datasets
Developing 3D models
3d modelling
3d math
Cloth Simulation
- http://andrew.wang-hoyer.com/experiments/cloth/
- https://github.com/ndrwhr/cloth-simulation
- http://web.archive.org/web/20070610223835/http://www.teknikus.dk/tj/gdc2001.htm
- https://en.wikipedia.org/wiki/Methods_of_computing_square_roots
- https://github.com/ThomasLengeling/traerphysics
- https://github.com/bitcraftlab/traer-physics
- http://jrc313.com/projects/processing/cloth/index.html
- https://www.processing.org/
- https://github.com/schteppe/p2.js/
- https://youtu.be/gqdXsw_Q_as playlist https://www.youtube.com/playlist?list=PL9_jI1bdZmz2emSh0UQ5iOdT2xRHFHL7E
- http://web.archive.org/web/20070609220647/http://www.cs.cmu.edu/~baraff/sigcourse/index.html or https://www.cs.cmu.edu/~baraff/sigcourse/
processing
source code for cloth sim,
import traer.physics.*;
ParticleSystem physics;
Particle[][] particles;
Particle[] anchors;
Particle anchor;
int screenWidth = 400;
int screenHeight = 400;
int gridSize = 10;
float springStrength = 1.0;
float springDamping = 0.1;
float particleMass = 0.1;
float physicsStep = 0.2;
float bounce = 0.8;
float xBoundsMin = 10.0;
float xBoundsMax = screenWidth - 10.0;
float yBoundsMin = 10.0;
float yBoundsMax = screenHeight - 10.0;
void setup()
{
size(screenWidth, screenHeight);
smooth();
fill(0);
frameRate(60);
physics = new ParticleSystem(0.2, 0.005);
particles = new Particle[gridSize][gridSize];
float gridStepX = (float) ((width / 2) / gridSize);
float gridStepY = (float) ((height / 2) / gridSize);
for (int i = 0; i < gridSize; i++)
{
for (int j = 0; j < gridSize; j++)
{
particles[i][j] = physics.makeParticle(0.1, j * gridStepX + (width / 4), i * gridStepY + 20, 0.0);
if (j > 0)
{
Particle p1 = particles[i][j - 1];
Particle p2 = particles[i][j];
physics.makeSpring(p1, p2, springStrength, springDamping, gridStepY);
}
if (i > 0)
{
Particle p1 = particles[i - 1][j];
Particle p2 = particles[i][j];
physics.makeSpring(p1, p2, springStrength, springDamping, gridStepY);
}
}
}
particles[0][0].makeFixed();
particles[0][gridSize - 1].makeFixed();
anchors = new Particle[4];
anchors[0] = particles[0][0];
anchors[1] = particles[0][gridSize - 1];
anchors[2] = particles[gridSize - 1][0];
anchors[3] = particles[gridSize - 1][gridSize - 1];
}
void draw()
{
physics.advanceTime(physicsStep);
if (mousePressed)
{
anchor.moveTo(mouseX, mouseY, 0);
anchor.velocity().clear();
}
background(255);
for (int i = 0; i < gridSize; i++)
{
for (int j = 0; j < gridSize; j++)
{
Particle p = particles[i][j];
float px = p.position().x();
float py = p.position().y();
float vx = p.velocity().x();
float vy = p.velocity().y();
if (px < xBoundsMin)
{
vx *= -bounce;
p.moveTo(xBoundsMin, py, 0);
p.setVelocity(vx, vy, 0);
}
else if (px > xBoundsMax)
{
vx *= -bounce;
p.moveTo(xBoundsMax, py, 0);
p.setVelocity(vx, vy, 0);
}
if (py < yBoundsMin)
{
vy *= -bounce;
p.moveTo(px, yBoundsMin, 0);
p.setVelocity(vx, vy, 0);
}
else if (py > yBoundsMax)
{
vy *= -bounce;
p.moveTo(px, yBoundsMax, 0);
p.setVelocity(vx, vy, 0);
}
}
}
for (int i = 0; i < gridSize; i++)
{
beginShape( LINE_STRIP );
curveVertex(particles[i][0].position().x(), particles[i][0].position().y());
for (int j = 0; j < gridSize; j++)
{
curveVertex(particles[i][j].position().x(), particles[i][j].position().y());
}
curveVertex(particles[i][gridSize - 1].position().x(), particles[i][gridSize - 1].position().y());
endShape();
}
for (int j = 0; j < gridSize; j++)
{
beginShape( LINE_STRIP );
curveVertex(particles[0][j].position().x(), particles[0][j].position().y());
for (int i = 0; i < gridSize; i++)
{
curveVertex(particles[i][j].position().x(), particles[i][j].position().y());
}
curveVertex(particles[gridSize - 1][j].position().x(), particles[gridSize - 1][j].position().y());
endShape();
}
}
void mousePressed()
{
int mx = mouseX;
int my = mouseY;
float d = -1.0;
for (int i = 0; i < gridSize; i++)
{
for (int j = 0; j < gridSize; j++)
{
float dTemp = distance(mx, my, particles[i][j].position().x(), particles[i][j].position().y());
if (dTemp < d || d < 0)
{
d = dTemp;
anchor = particles[i][j];
}
}
}
}
void mouseReleased()
{
if (keyPressed)
{
if (key == ' ')
{
anchor.makeFixed();
}
}
else
{
anchor.makeFree();
}
anchor = null;
}
float distance(float x1, float y1, float x2, float y2)
{
float dx = x2 - x1;
float dy = y2 - y1;
return sqrt((dx * dx) + (dy * dy));
}
Audio Processing
DSP
- The Scientist and Engineer's Guide to Digital Signal Processing
- Faust (Functional Audio Stream) programming language
vst3 with MINGW
- https://forum.juce.com/t/is-vst3-sdk-compatible-with-mingw/12385
- http://kxstudio.sourceforge.net/Paste/repo/GMOVw
- https://github.com/steinbergmedia/vst3sdk/issues/8
- http://www.martin-finke.de/blog/tags/making_audio_plugins.html
- Generating a VST Plugin via Faust.
- MAX.
- https://juce.com/
VST framework
open source synths
reason livestreams
physical synthesis libraries/ papers
- https://github.com/thestk/stk Perry R. Cook and Gary P. Scavone.
- https://ccrma.stanford.edu/~jos/jnmr/jnmr.pdf
- https://ccrma.stanford.edu/~jos/wgj/wgj.pdf
- https://www.osar.fr/notes/waveguides/
- https://ccrma.stanford.edu/~jos/wg.html
- https://ccrma.stanford.edu/~jos/pasp/pasp.html
- https://github.com/mi-creative/mi-gen
- https://github.com/mi-creative/miPhysics_Processing
- Martin Shuppius - Physical modelling of guitar strings (ADC'17)
- Some pdf papers
audio/midi interface libraries
- https://wiki.linuxaudio.org/wiki/programming_libraries
- http://www.portaudio.com/. Audacity uses this.
- https://github.com/AuLib/AuLib
DAW golang in audio
headless DAW
HLS audio streaming
ffmpeg -i song1.mp3 -map 0 -map 0 -c:a aac -b:a:0 320k -b:a:1 128k -var_stream_map "a:0,name:320k a:1,name:128k" -master_pl_name song1_manifest.m3u8 -f hls -hls_flags single_file -hls_playlist_type vod -hls_segment_filename "song1_%v/classroom.ts" song1_%v/index.m3u8
speaker frequency response
Rust Audio plugins
Understanding Soundfont Synthesis
I took RustySynth as an example from github, https://github.com/sinshu/rustysynth, which is written in rust.
Below is an overview of how the various pieces of RustySynth fit together, both in terms of "who depends on whom" (module/dependency structure) and in terms of the main runtime call hierarchy (what calls what at runtime).
1. High-Level Cargo/Workspace Layout
At the top level, the repository has three main sub-directories (workspace members):
- rustysynth/ : The core synthesizer library (this is where all of the SoundFont parsing, voice management, sample generation, and effects live).
- rustysynth_test/ : A small crate for running unit/integration tests against the core library.
- example/ : One or more example programs (showing how to load a SoundFont, create a Synthesizer, play a MIDI file, etc.).
For our purposes, everything we care about is in rustysynth/src/. Below is a rough sketch of the files you'll find there (names taken from docs.rs and a GitHub directory listing):
rustysynth/
├─ Cargo.toml
└─ src/
├─ error.rs
├─ midi_file.rs
├─ midi_file_sequencer.rs
├─ sound_font.rs
├─ synthesizer_settings.rs
├─ synthesizer.rs
├─ voice.rs
├─ oscillator.rs
├─ envelope.rs
├─ comb_filter.rs
├─ all_pass_filter.rs
├─ reverb.rs
├─ chorus.rs
└─ … (possibly a few small helpers, but these are the main pieces)
Each of the .rs files corresponds to a mod in lib.rs, and together they form the library's public/exported API (plus internal helpers).
2. Module/Dependency Structure
Below is a simplified "dependency graph" of modules, i.e. which mod files refer to which other modules. Arrows (→) mean "depends on / uses functionality from."
┌─────────────────┐
│ error.rs │ ←── defines `MidiFileError`, `SoundFontError`, `SynthesizerError`
└─────────────────┘
▲
│
┌─────────────────┐
│ sound_font.rs │ ←── uses `error::SoundFontError`, plus low‐level I/O traits (`Read`, `Seek`)
└─────────────────┘
▲
│
┌────────────────────────────┐
│ midi_file.rs │ ←── uses `error::MidiFileError`
└────────────────────────────┘
▲
│
┌──────────────────────────────────────┐
│ midi_file_sequencer.rs │ ←── uses `MidiFile` (from midi_file.rs)
│ | and `Synthesizer` (from synthesizer.rs)
└──────────────────────────────────────┘
▲
│
┌──────────────────────────────┐
│ synthesizer_settings.rs │ ←── trivial: only holds numeric fields (sample_rate, block_size, max_polyphony)
└──────────────────────────────┘
▲
│
┌──────────────────────────────┐
│ synthesizer.rs │ ←── uses:
│ │ • `SynthesizerSettings`
│ │ • `SoundFont`
│ │ • `Voice` (from voice.rs)
│ │ • DSP buffers (Vec<f32>)
│ │ • Effects (`Reverb`, `Chorus`)
└──────────────────────────────┘
▲
│
┌──────────────────────────────┐
│ voice.rs │ ←── uses:
│ │ • `Oscillator` (from oscillator.rs)
│ │ • `Envelope` (from envelope.rs)
│ │ • `CombFilter`, `AllPassFilter` (from their own modules)
└──────────────────────────────┘
▲
│
┌──────────────────────────────┐
│ oscillator.rs │ (no dependencies except `std::f32::consts::PI`)
└──────────────────────────────┘
┌──────────────────────────────┐
│ envelope.rs │ (no dependencies beyond basic math)
└──────────────────────────────┘
┌──────────────────────────────┐
│ comb_filter.rs │ (no external dependencies—just a buffer and feedback logic)
└──────────────────────────────┘
┌──────────────────────────────┐
│ all_pass_filter.rs │ (stateless/all-pass filter logic only)
└──────────────────────────────┘
┌──────────────────────────────┐
│ reverb.rs │ ←── uses `CombFilter` and `AllPassFilter`
└──────────────────────────────┘
┌──────────────────────────────┐
│ chorus.rs │ (similar: uses one or more LFOs + delay lines; no other cross‐deps)
└──────────────────────────────┘
- error.rs defines the crate's error types (MidiFileError, SoundFontError, SynthesizerError). Other modules simply import these via pub use error::…;.
- sound_font.rs is responsible for parsing an SF2 file (SoundFont::new(...)) and exposing types like Preset, SampleHeader, etc. It only depends on I/O traits (Read, Seek) and error::SoundFontError.
- midi_file.rs parses a standard MIDI file and exposes MidiFile and its associated data structures (tracks, events). It depends on error::MidiFileError.
- midi_file_sequencer.rs drives playback of a MidiFile through a Synthesizer. Internally, it calls methods on Synthesizer (e.g. note_on, note_off, render).
- synthesizer_settings.rs is trivial: just a small struct holding sample_rate, block_size, maximum_polyphony.
- synthesizer.rs is the heart of the real-time (or block-based) engine. It:
  - Holds an Arc<SoundFont> (so multiple threads can share the same SoundFont safely).
  - Keeps a Vec<Voice> (one slot per possible voice).
  - Keeps per-channel state (channels: Vec<ChannelState>).
  - Manages effect units (Reverb, Chorus).
  - Exposes methods like
    fn new(sound_font: &Arc<SoundFont>, settings: &SynthesizerSettings) -> Result<Self, SynthesizerError>
    fn note_on(&mut self, channel: u8, key: u8, velocity: u8)
    fn note_off(&mut self, channel: u8, key: u8, velocity: u8)
    fn render(&mut self, left: &mut [f32], right: &mut [f32])
- voice.rs represents a single active note ("voice"). Each voice holds:
  - An Oscillator (for waveform generation).
  - An Envelope (for ADSR or similar amplitude shaping).
  - A small bank of CombFilters and AllPassFilters (for per-voice filtering).
  - A reference to the SampleHeader (so it knows which PCM data to read).
  - Methods like fn new(…) to create a voice from a given InstrumentRegion or PresetRegion, and fn process_block(&mut self, left: &mut [f32], right: &mut [f32]) to generate its output into the provided audio block.
- oscillator.rs implements low-level math for, e.g., generating a sine wave, a square wave, or reading a PCM sample from memory. It does not depend on any other module except std::f32::consts.
- envelope.rs implements standard envelope generators (ADSR). No cross-deps.
- comb_filter.rs and all_pass_filter.rs implement the two basic filter types used both in each voice (for "filter per voice") and inside the main reverb unit (for "global reverb").
- reverb.rs builds on CombFilter + AllPassFilter to implement a stereo reverb effect.
- chorus.rs implements a stereo chorus effect (no further dependencies).
- Finally, lib.rs has lines like:
  mod error;
  mod sound_font;
  mod midi_file;
  mod midi_file_sequencer;
  mod synthesizer_settings;
  mod synthesizer;
  mod voice;
  mod oscillator;
  mod envelope;
  mod comb_filter;
  mod all_pass_filter;
  mod reverb;
  mod chorus;
  pub use error::{MidiFileError, SoundFontError, SynthesizerError};
  pub use sound_font::SoundFont;
  pub use midi_file::{MidiFile, MidiEvent};
  pub use midi_file_sequencer::MidiFileSequencer;
  pub use synthesizer_settings::SynthesizerSettings;
  pub use synthesizer::Synthesizer;
  so that downstream users can write:
  use rustysynth::SoundFont;
  use rustysynth::Synthesizer;
  use rustysynth::MidiFile;
  use rustysynth::MidiFileSequencer;
  without needing to know the internal module structure.
3. Runtime Call-Hierarchy (What Happens When You Synthesize)
Below is the typical sequence of calls, from loading a SoundFont to generating audio. You can think of this as a “dynamic call graph” that shows how, at runtime, each component invokes the next.
(1) User code:
let mut sf2_file = File::open("SomeSoundFont.sf2")?;
let sound_font = Arc::new(SoundFont::new(&mut sf2_file)?);
let settings = SynthesizerSettings::new(44100);
let mut synth = Synthesizer::new(&sound_font, &settings)?;
└──▶ SoundFont::new(...) parses the SF2 file, building:
• a list of Presets
• for each Preset, a Vec<PresetRegion>
• each PresetRegion refers to one or more InstrumentRegion
• each InstrumentRegion points to SampleHeader and bank parameters
• (Internally, sound_font.rs may also build a “preset_lookup” table, etc.)
(2) User code (optional):
// If you want to play a standalone MIDI file:
let mut midi_file = File::open("SomeSong.mid")?;
let midi = Arc::new(MidiFile::new(&mut midi_file)?);
let mut sequencer = MidiFileSequencer::new(synth);
└──▶ MidiFile::new(...) parses all tracks, tempo maps, events (note on/off, CC, etc.)
└──▶ MidiFileSequencer::new(...) stores the `Synthesizer` inside itself (by move or by value).
(3) User code:
// In a real‐time context, you might spawn an audio thread that repeatedly does:
// loop {
// sequencer.render(&mut left_buf, &mut right_buf);
// send to audio output device
// }
// In an offline context:
sequencer.play(&midi, /* loop = false */);
let total_samples = (settings.sample_rate as f64 * midi.get_length()) as usize;
let mut left = vec![0.0_f32; total_samples];
let mut right = vec![0.0_f32; total_samples];
sequencer.render(&mut left[..], &mut right[..]);
└──▶ MidiFileSequencer::play(...) // sets an internal “start_time = 0” or similar
└──▶ MidiFileSequencer::render(left, right):
├── updates internal “current_timestamp” based on block size or realtime clock
├── calls `Synthesizer::note_on/off(...)` for any MIDI events whose timestamps fall in this block
└── calls `Synthesizer::render(left, right)`
(4) Synthesizer::note_on(channel, key, velocity):
├── Look up which **PresetRegion** should respond (based on channel’s current bank/program).
├── For that PresetRegion, find the matching **InstrumentRegion**(s) for that key & velocity.
├── For each matching InstrumentRegion:
│ ├── Find a free voice slot (`self.voices[i]` where `i < maximum_polyphony` and voice isn’t already in use).
│ ├── Call `Voice::new( instrument_region, sample_rate )` to create a brand-new `Voice` struct.
│ ├── Initialize that voice’s fields (oscillator frequencies, envelope ADSR parameters, filter coefficients, etc.).
│ └── Store the new `Voice` (or a handle to it) in `self.voices[i]`.
└── Return, now that voice is “active.”
(5) Synthesizer::note_off(channel, key, velocity):
├── Search through `self.voices` for any voice whose channel/key match this note.
├── For each such voice, call `voice.note_off()` (which typically sets the envelope into its “release” stage).
└── Return (voice remains “active” until its envelope fully dies out, at which point Synthesizer may garbage-collect it next block).
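As a rough illustration of steps (4) and (5), the note-on/note-off logic described above could be sketched like this. This is illustrative Rust pseudocode, not RustySynth's actual implementation; the fields on Voice (active, channel, key) and the lookup_regions helper are assumptions made for the sketch.

// Illustrative sketch only -- not the crate's real code.
fn note_on(&mut self, channel: i32, key: i32, velocity: i32) {
    // find the preset/instrument regions that respond to this key & velocity
    let regions = self.lookup_regions(channel, key, velocity); // hypothetical helper
    for region in regions {
        // find a free voice slot, bounded by maximum_polyphony
        if let Some(slot) = self.voices.iter_mut().find(|v| !v.active) {
            // build a fresh voice from the region data (oscillator, envelope, filters)
            *slot = Voice::new(&region, self.sample_rate);
            slot.active = true;
        }
        // a real implementation would typically steal the oldest voice when none are free
    }
}

fn note_off(&mut self, channel: i32, key: i32) {
    // matching voices enter their release phase; they stay active until the
    // envelope decays to silence, after which render() can reclaim the slot
    for voice in self.voices.iter_mut().filter(|v| v.active && v.channel == channel && v.key == key) {
        voice.note_off();
    }
}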
(6) Synthesizer::render(left: &mut [f32], right: &mut [f32]):
├── Zero out `left[]` and `right[]` for this block.
├── For each active `voice` in `self.voices`:
│ └── Call `voice.process_block(&mut voice_buf_l, &mut voice_buf_r)`.
│ • Inside `Voice::process_block(...)`:
│ ├── For each sample index `n` in the block:
│ │ ├── `osc_sample = self.oscillator.next_sample()`
│ │ ├── `amp_envelope = self.envelope.next_amplitude()`
│ │ └── `mixed = osc_sample * amp_envelope`
│ │ (optionally: apply per-voice LFO mod, filters, etc.)
│ │
│ ├── Once the raw waveform is generated, run it through per-voice filters:
│ │ • e.g. `comb_out = comb_filter.process(mixed)`
│ │ • e.g. `all_pass_out = all_pass_filter.process(comb_out)`
│ │ • …repeat for each filter in `self.comb_filters` and `self.all_pass_filters`.
│ └── Write the final result into `voice_buf_l[n]` and/or `voice_buf_r[n]`.
├── Accumulate each voice’s output into the master block:
│ • `left[n] += voice_buf_l[n]`
│ • `right[n] += voice_buf_r[n]`
├── Once all voices have contributed, apply **global effects** in this order (by default):
│ 1. `self.reverb.process(&mut left, &mut right)`
│ 2. `self.chorus.process(&mut left, &mut right)`
├── Multiply each sample in `left[]` and `right[]` by `self.master_volume`.
└── Return from `render(…)`—the caller (sequencer or user code) now has a filled audio buffer.
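Putting the user-level snippets from steps (1) through (3) together, an offline render of a MIDI file looks roughly like the following. The file names are placeholders; the calls themselves (SoundFont::new, Synthesizer::new, MidiFileSequencer::new, play, render, get_length) are the ones already shown above.

use std::fs::File;
use std::sync::Arc;
use rustysynth::{MidiFile, MidiFileSequencer, SoundFont, Synthesizer, SynthesizerSettings};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // (1) load the SoundFont and build the synthesizer
    let mut sf2 = File::open("SomeSoundFont.sf2")?;
    let sound_font = Arc::new(SoundFont::new(&mut sf2)?);
    let settings = SynthesizerSettings::new(44100);
    let synthesizer = Synthesizer::new(&sound_font, &settings)?;

    // (2) load the MIDI file and wrap the synthesizer in a sequencer
    let mut mid = File::open("SomeSong.mid")?;
    let midi_file = Arc::new(MidiFile::new(&mut mid)?);
    let mut sequencer = MidiFileSequencer::new(synthesizer);

    // (3) render the whole song into two f32 buffers
    sequencer.play(&midi_file, false); // loop = false
    let total_samples = (settings.sample_rate as f64 * midi_file.get_length()) as usize;
    let mut left = vec![0.0_f32; total_samples];
    let mut right = vec![0.0_f32; total_samples];
    sequencer.render(&mut left[..], &mut right[..]);

    // left/right now hold the rendered audio; write them to a WAV file or
    // hand them to an audio output library of your choice.
    Ok(())
}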
4. Summary of “Who Calls Whom”
Below is a compact list of “call edges” at runtime, annotated with which module implements which function:
- User → SoundFont::new(&mut R: Read + Seek) : Result<SoundFont, SoundFontError> (in sound_font.rs)
- User → MidiFile::new(&mut R: Read + Seek) : Result<MidiFile, MidiFileError> (in midi_file.rs)
- User → Synthesizer::new(&Arc<SoundFont>, &SynthesizerSettings) : Result<Synthesizer, SynthesizerError> (in synthesizer.rs)
  • inside this constructor, it:
    – calls preset_lookup = SoundFont::build_preset_lookup() (in sound_font.rs)
    – allocates Vec<Voice> slots (initially all “inactive”)
    – constructs Vec<ChannelState> for 16 MIDI channels
    – creates Reverb::new(sample_rate) (in reverb.rs) and Chorus::new(sample_rate) (in chorus.rs)
    – stores block_left = Vec::with_capacity(block_size), block_right = Vec::with_capacity(block_size), etc.
- User → MidiFileSequencer::new(synth: Synthesizer) : MidiFileSequencer (in midi_file_sequencer.rs)
  • stores synth internally, sets internal “cursor = 0,” no audio generated yet.
- User → MidiFileSequencer::play(&MidiFile, loop_flag: bool) (in midi_file_sequencer.rs)
  • resets time to zero, sets up an internal event iterator from the MidiFile.
- Caller (sequencer or user) → MidiFileSequencer::render(left, right) (in midi_file_sequencer.rs)
  • computes which MIDI events fall into this block’s timestamp range, and for each event:
    – if NoteOn, calls synth.note_on(channel, key, velocity).
    – if NoteOff, calls synth.note_off(channel, key, velocity).
  • after processing all events, calls synth.render(left, right).
- Sequencer → Synthesizer::note_on(channel, key, velocity) (in synthesizer.rs)
  • looks up the appropriate PresetRegion via a hash map built at construction.
  • calls Voice::new(preset_region, sample_rate).
  • stores that Voice in the first free slot of self.voices.
- Sequencer → Synthesizer::note_off(channel, key, velocity) (in synthesizer.rs)
  • finds matching voice(s), calls voice.note_off(). (Voice will enter its release phase.)
- Sequencer or user → Synthesizer::render(left, right) (in synthesizer.rs)
  • zeroes out both left and right buffers.
  • loops over every active Voice in self.voices and calls voice.process_block(voice_buf_l, voice_buf_r).
  • inside each voice.process_block(...) (in voice.rs):
    – calls Oscillator::next_sample() repeatedly (in oscillator.rs).
    – calls Envelope::next_amplitude() for amplitude shaping (in envelope.rs).
    – sends the raw sample through each CombFilter::process(sample) (in comb_filter.rs).
    – then through each AllPassFilter::process(sample) (in all_pass_filter.rs).
    – writes the final per-voice sample into voice_buf_l[n] and voice_buf_r[n].
  • the synth accumulates each voice_buf_l / voice_buf_r into the master left / right block.
  • after all voices are done, it:
    – calls Reverb::process(left, right) (in reverb.rs), which internally runs a bank of CombFilters and AllPassFilters to produce a stereo reverb tail.
    – calls Chorus::process(left, right) (in chorus.rs), which applies a short, modulated delay to thicken the sound.
    – scales left[] and right[] by self.master_volume.
  • returns.
5. “Who Depends on Whom” Recap
Below is a summary list of the modules (in descending dependency order), reiterating what we already sketched above:
- error.rs
  - Defines MidiFileError, SoundFontError, SynthesizerError.
  - No dependencies on other crate modules (beyond core/std).
- sound_font.rs
  - Depends on error::SoundFontError and std::io::{Read, Seek}.
  - Exposes types like SoundFont, Preset, InstrumentRegion, SampleHeader.
- midi_file.rs
  - Depends on error::MidiFileError and core I/O traits.
  - Exposes MidiFile, MidiEvent, etc.
- midi_file_sequencer.rs
  - Depends on midi_file::MidiFile + MidiEvent.
  - Depends on synthesizer::Synthesizer (calls its note_on, note_off, render).
- synthesizer_settings.rs
  - No cross-deps (just holds basic numeric fields).
- synthesizer.rs
  - Depends on:
    - sound_font::SoundFont
    - synthesizer_settings::SynthesizerSettings
    - voice::Voice
    - reverb::Reverb
    - chorus::Chorus
    - basic containers (Vec, Arc, etc.)
- voice.rs
  - Depends on:
    - oscillator::Oscillator
    - envelope::Envelope
    - comb_filter::CombFilter
    - all_pass_filter::AllPassFilter
  - Also references some of the data structures from sound_font (e.g. the SampleHeader inside an InstrumentRegion).
- oscillator.rs, envelope.rs, comb_filter.rs, all_pass_filter.rs
  - These are leaf modules. They do not depend on any other RustySynth module. They implement low-level DSP building blocks (waveform generation, ADSR envelopes, comb/all-pass filters).
- reverb.rs
  - Depends on comb_filter::CombFilter and all_pass_filter::AllPassFilter.
  - Implements a stereo reverb by chaining eight comb filters + four all-pass filters per channel.
- chorus.rs
  - Typically implements a simple stereo chorus (delay lines + LFO).
  - No further cross-deps (just basic numeric math).
6. Putting It All Together
- Build-time/compile-time structure
  - At compile time, Cargo compiles all these modules into one library (the crate's Cargo.toml uses the v2 feature resolver, resolver = "2").
  - lib.rs (in rustysynth/src/lib.rs) declares every module (mod error; mod sound_font; … mod chorus;) and re-exports the public types, as shown in the listing earlier.
  - This exports exactly the high-level types a user needs:
    • SoundFont (plus associated errors)
    • MidiFile (plus associated errors)
    • SynthesizerSettings
    • Synthesizer (and its methods: note_on, note_off, render)
    • MidiFileSequencer (and its methods: play, render)
- Run-time call graph
  - The user first loads a SoundFont (calling into sound_font::SoundFont::new(...)).
  - Then they construct a Synthesizer, which in turn calls into reverb::Reverb::new and chorus::Chorus::new, and sets up the voice pool (Vec<Voice>) defined in voice.rs.
  - Each time note_on is invoked, synthesizer::Synthesizer instantiates a Voice by calling voice::Voice::new(...). That in turn calls constructors in oscillator, envelope, comb_filter, and all_pass_filter.
  - On every audio block, Synthesizer::render loops over voices and calls Voice::process_block, which in turn calls:
    - Oscillator::next_sample (in oscillator.rs)
    - Envelope::next_amplitude (in envelope.rs)
    - CombFilter::process (in comb_filter.rs)
    - AllPassFilter::process (in all_pass_filter.rs)
  - The block of per-voice samples is summed into a master buffer, then handed to Reverb::process (in reverb.rs) and Chorus::process (in chorus.rs), and finally scaled by master_volume.
- Sequencer integration
  - If the user wants to play a MIDI file, they first call MidiFile::new(...) (in midi_file.rs) to parse tracks/events.
  - They then create a MidiFileSequencer (in midi_file_sequencer.rs), passing in the Synthesizer.
  - Each time they call sequencer.render(...), the sequencer:
    1. Advances its internal time cursor by block_size samples.
    2. Emits any scheduled NoteOn/NoteOff events via Synthesizer::note_on / note_off.
    3. Calls Synthesizer::render(...) to fill the next block of audio.
  - A minimal block-based render loop is sketched below.
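For real-time or streaming use, the same render call can be made in small blocks instead of one large buffer, which is what the commented-out loop in step (3) of the runtime walkthrough alludes to. A minimal sketch, assuming the sequencer, settings, and midi_file from the earlier example, and a hypothetical send_to_output function standing in for your actual audio output path:

let block_size = 512;
let total_samples = (settings.sample_rate as f64 * midi_file.get_length()) as usize;
let mut left = vec![0.0_f32; block_size];
let mut right = vec![0.0_f32; block_size];

sequencer.play(&midi_file, false);
let mut rendered = 0;
while rendered < total_samples {
    // each call advances the sequencer by one block: it emits any due
    // note_on/note_off events, then fills block_size samples of audio
    sequencer.render(&mut left[..], &mut right[..]);
    send_to_output(&left, &right); // hypothetical output function (cpal callback, WAV writer, ...)
    rendered += block_size;
}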
In a Nutshell
- “Structure dependency” (compile-time):
error.rs
↑
sound_font.rs midi_file.rs
↑ ↑
synthesizer_settings.rs
↑
synthesizer.rs ←─ voice.rs ←─ (oscillator.rs, envelope.rs, comb_filter.rs, all_pass_filter.rs)
↑ ↑
midi_file_sequencer.rs └─ reverb.rs (also depends on comb & all_pass)
└─ chorus.rs
- “Call hierarchy” (run-time):
  - User → SoundFont::new (parses SF2)
  - User → Synthesizer::new (builds voice pool, effect units)
  - (Optional) User → MidiFile::new (parses MIDI file)
  - (Optional) User → MidiFileSequencer::new(synth)
  - Each audio block →
    - Sequencer → note_on / note_off on Synthesizer for timed events
    - Sequencer (or user thread) → Synthesizer::render(left, right)
      • Synthesizer::render → calls each Voice::process_block
      • Voice::process_block → Oscillator::next_sample → Envelope::next_amplitude → CombFilter::process → AllPassFilter::process
      • After all voices are summed, Synthesizer::render → Reverb::process → Chorus::process → scale by master volume.
This should give you a clear picture of (a) how the modules depend on one another in the source tree, and (b) how, at run time, each call eventually fans out into the low-level DSP building blocks. We can explore any particular module more deeply as needed, e.g. the exact algorithm inside CombFilter::process, or how PresetRegion data flows into Voice::new.
Broadcasting
Internet broadcasting
- Live streaming web audio and video
- https://www.sourcefabric.org/en/airtime/, https://github.com/sourcefabric/airtime, manual
- https://www.liquidsoap.info/, tutorial
Movie production tools
Storyboarding
- Krita
- Opentoonz
- Tahoma fork
- Laidout
- Gegl-qt binding
- https://github.com/imgflo/imgflo ui for gegl
- https://youtu.be/FVpho_UiDAY
We will have a bunch of loose topics here, to be organized later.
Notes
Initial storyboarding thoughts: I will use pyqt. All the speed-critical operations will live in the qt/c++ world, with a python binding used on top of that. Will that make prototyping easy? The other options are to build the whole thing in c++/qt, or to build the app in rust using a qt-rust binding. How do we incorporate,
- gegl??
- brush engine??
sfx
fountain tools
3d modelling
color grading
Animation
AI generated physics based animation
Music
Drum patterns
rust midi library
We can load a midi file in a rust based tauri app, and use svelte for the app logic. We can write functions in rust to do even the midi processing at native speed, or we can use a wasm library built with rust, https://developer.mozilla.org/en-US/docs/WebAssembly/Rust_to_wasm. Although I can't really see much of a benefit of wasm over a native rust function in tauri for all practical uses. We can use plain javascript for all the processing if we really need in-browser processing.
The plan is to convert the midi structure to json, send it to the browser, and then convert it back to midi before saving. A minimal sketch of that idea follows.
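The sketch below assumes the midly crate for MIDI parsing, serde for JSON serialization, and a Tauri command as the bridge; the MidiSummary/TrackSummary structs and the load_midi command name are illustrative, not an existing API.

use serde::Serialize;

#[derive(Serialize)]
struct TrackSummary {
    events: usize,
}

#[derive(Serialize)]
struct MidiSummary {
    format: String,
    tracks: Vec<TrackSummary>,
}

// Hypothetical Tauri command; invoke it from the svelte side with invoke("load_midi", { path }).
#[tauri::command]
fn load_midi(path: String) -> Result<MidiSummary, String> {
    let bytes = std::fs::read(&path).map_err(|e| e.to_string())?;
    let smf = midly::Smf::parse(&bytes).map_err(|e| e.to_string())?;
    Ok(MidiSummary {
        format: format!("{:?}", smf.header.format),
        tracks: smf
            .tracks
            .iter()
            .map(|t| TrackSummary { events: t.len() })
            .collect(),
    })
}

Going the other way (edited JSON back to MIDI) would map the structure onto midly's event types and save it with smf.save(path).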
rust based DAW
This is an interesting possibility, maybe a tauri based user interface with the backend in rust. We can probably utilize a pure rust cross platform audio library (a minimal cpal sketch follows this list),
- https://github.com/RustAudio/cpal
- https://github.com/MeadowlarkDAW/Meadowlark
- https://youtu.be/Z4P5f6ZJ_nE
- https://youtu.be/Yom9E-67bdI
- https://github.com/SolarLiner/nih-reverb
- https://github.com/vizia/vizia for gui
- https://github.com/Auritia/Auritia
- https://npm.io/package/svelte-tauri-filedrop This will allow file drop in on DAW
- https://github.com/emilyskidsister/oxygen audio recording and playback using cpal
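As a starting point on the audio output side, here is a minimal cpal sketch that opens the default output device and plays a 440 Hz sine tone. It assumes cpal's 0.15 API and an f32 output format, so treat it as a sketch rather than production code.

use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

fn main() {
    let host = cpal::default_host();
    let device = host.default_output_device().expect("no output device");
    let supported = device.default_output_config().expect("no default config");
    let config = supported.config();
    let sample_rate = config.sample_rate.0 as f32;
    let channels = config.channels as usize;

    // simple sine oscillator driven from the audio callback
    let mut phase = 0.0_f32;
    let stream = device
        .build_output_stream(
            &config,
            move |data: &mut [f32], _: &cpal::OutputCallbackInfo| {
                for frame in data.chunks_mut(channels) {
                    let sample = (phase * std::f32::consts::TAU).sin() * 0.2;
                    phase = (phase + 440.0 / sample_rate).fract();
                    for out in frame.iter_mut() {
                        *out = sample;
                    }
                }
            },
            |err| eprintln!("stream error: {err}"),
            None, // optional timeout, present in cpal 0.15+
        )
        .expect("failed to build stream");

    stream.play().expect("failed to start stream");
    std::thread::sleep(std::time::Duration::from_secs(2));
}

A DAW backend would replace the sine generator with samples pulled from its own mixer (or from rustysynth's render call) inside the same callback.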
audio and vst3 with rust
open source plugins
- https://github.com/TheWaveWarden/odin2
- https://github.com/surge-synthesizer/surge
- https://github.com/zynaddsubfx/zynaddsubfx
- https://github.com/surge-synthesizer/stochas
- https://github.com/trummerschlunk/master_me for mastering
karaoke
- https://github.com/gyunaev/spivak
- https://github.com/magic-akari/lrc-maker it also is a player react app.
- https://github.com/outloudvi/lrcedit.js a simple example using web audio api for lrc edit
rust soundfont and synthesis
- https://github.com/sinshu/rustysynth
- https://github.com/PolyMeilex/OxiSynth fluidsynth in rust
- https://github.com/ameobea/web-synth fm synthesis
- https://www.youtube.com/watch?v=v0Qp7eWVyes wavetable synthesis
- https://rustrepo.com/repo/geom3trik-tuix_audio_synth
- https://github.com/geom3trik/vizia-audio-synth
rust libraries
- https://github.com/RazrFalcon/resvg svg rendering
- https://github.com/servo/servo browser
yamaha style file
- Style file specification (archived)
- https://github.com/bures/sff2-tools
- https://wierzba.homepage.t-online.de/StyleFileDescription_v21.pdf
- https://www.jjazzlab.com/en/
- https://psrtutorial.com/index.html
- https://youtu.be/be_0JnhI-Wc
- https://youtu.be/gEGd__2ZQc0
python libraries
- style (sff2) files api https://github.com/bures/sff2-tools
- https://github.com/bspaans/python-mingus
- SCAMP https://pypi.org/project/scamp/. doc http://scamp.marcevanstein.com/
AI datasets
tool for representing music for AI
Rust Audio player
Converting old DVDs/cd into mp4/mp3
DVDs
Old DVDs are bulky and take up space. I have tons of them that I accumulated over the years, and I am running out of space in my cabinet. This will be an ongoing project for me to turn them into mp4 so that I can store them on a hard drive and view them from my TV.
You will first need to install the following software before ripping a DVD:
- Handbrake
- Ubuntu Restricted Extras
- Libdvd-pkg
You can find Handbrake in the default Ubuntu repositories, but there's a decent chance that the package will be fairly outdated. Thankfully, the Handbrake developers maintain an official Ubuntu PPA.
Begin by opening a terminal window and typing the following command to add the PPA to your system.
sudo add-apt-repository ppa:stebbins/handbrake-releases
Now, update your package database, and install Handbrake.
sudo apt update
sudo apt install handbrake-gtk
This installs the video transcoding software used to convert DVDs to MP4.
Next, type the following command at the terminal prompt to install the restricted extras package. This will install a collection of codecs:
sudo apt install ubuntu-restricted-extras
During the installation, a blue screen will appear with a license agreement. Press Tab to highlight the option to accept the agreement, and press Enter.
Finally, install the libdvd-pkg to install a library that will let you play DVDs within Ubuntu by entering the following command:
sudo apt install libdvd-pkg
At the end of the process, you may get a message saying you need to run another apt-get command to continue installing the package. If you get this message, type the following command:
sudo dpkg-reconfigure libdvd-pkg
converting to ISO first and then using handbrake
The old linux machine which has a DVD drive is slow. My other machine is fast but has no DVD drive. So I decided to rip the DVDs to an ISO image first, and then use the faster machine to turn them into mp4.
Create an ISO disk image from a CD-ROM, DVD or Blu-ray disk.
First get the block count. I am using /dev/dvd; on your machine it could be /dev/sr0. Make sure you are using the right device name for your machine.
isosize -d 2048 /dev/dvd
Now run the dd command, showing a progress bar while it copies:
$ sudo dd if=/dev/dvd of=output.iso bs=2048 count=<blocks> status=progress
Combining both in the same script,
blocks=$(isosize -d 2048 /dev/dvd)
sudo dd if=/dev/dvd of=output.iso bs=2048 count=$blocks status=progress
Now you can use output.iso for hard disk installation or as a backup copy of the CD/DVD media. Please note that dd is a standard UNIX command, so you should be able to create a backup/ISO image under any UNIX-like operating system.
FYI, you can also write a previously generated ISO image back to a writable block device (for example a USB stick; replace /dev/sdX with the correct device) using dd itself:
$ sudo dd if=output.iso of=/dev/sdX bs=4096 conv=noerror status=progress
Windows platform DVD decoding in Handbrake
The Windows distribution lacks a video decoder for some DVDs; you will see choppy output in that case. Download the libdvdcss-2.dll file from VLC and copy it into the HandBrake directory. That should resolve the decoder issue.
I got these instructions from the following sources; read them for more details.
Audio CDs
This recipe worked pretty well.
Install cdparanoia and lame.
cdparanoia -vsQ
lists all the tracks.
cdparanoia -B
rips all the tracks to .wav format. If the CD has bad tracks and you don't want those tracks, the -X option will not output them:
cdparanoia -BX
The following python snippet will convert all the files to mp3 using lame.
#!/usr/bin/env python3
import os
from pathlib import Path

for path in Path('.').rglob('*.wav'):
    # build an mp3 filename next to the wav, replacing spaces with dashes
    newpath = path.with_name(path.stem.replace(' ', '-') + '.mp3')
    # encode with lame at VBR quality 2, then remove the original wav
    cmd = "lame -V2 '{0}' '{1}';rm '{0}';".format(str(path), str(newpath))
    print(cmd)
    returned_value = os.system(cmd)
    print(returned_value)
concat ts files
C:\ffmpeg\bin\ffmpeg.exe -f concat -safe 0 -i mylist.txt -c copy xyz.mp4
mylist.txt file (UTF-8),
file 'abc.ts'
file 'def.ts'
...
Text to Speech
Common models
These are all based on common models and forked from each other.
- https://github.com/coqui-ai/TTS Has low end, Sounds good for theatrical use. Based on following two.
- https://github.com/mozilla/TTS Nice, lacks low end. Sounds natural for musical use.
- https://github.com/erogol/WaveRNN
- https://github.com/rhasspy/piper
I used coqui tts in colab, tried the different model and vocoder combinations they have, and produced a zip file containing all the audio clips it generated. You can obtain the zip file here if you are interested in taking a listen to the audio clips.