Initial commit for blog based on Zola
.gitignore (vendored, new file)
@@ -0,0 +1 @@
public/

.gitmodules (vendored, new file)
@@ -0,0 +1,4 @@
[submodule "themes/after-dark"]
	path = themes/after-dark
	url = https://github.com/getzola/after-dark
	branch = master
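Since the after-dark theme comes in as a git submodule, a plain clone leaves `themes/after-dark` empty. A minimal sketch of how the submodule gets registered and populated, using two throwaway local repositories to stand in for the blog and the theme (names here are hypothetical):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# Stand-in for the theme repository (normally the GitHub URL in .gitmodules).
git init -q theme
git -C theme -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m init
# Stand-in for the blog repository.
git init -q blog
cd blog
# Registers the submodule in .gitmodules and clones it into themes/after-dark.
git -c protocol.file.allow=always submodule add "$tmp/theme" themes/after-dark
cat .gitmodules
```

On a real checkout the equivalent one-liners are `git clone --recurse-submodules <url>`, or `git submodule update --init` after a plain clone.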

config.toml (new file)
@@ -0,0 +1,29 @@
base_url = "https://sanchayanmaity.net"
title = "Coherent Musings"
theme = "after-dark"
output_dir = "public"
author = "Sanchayan Maity"
generate_sitemap = true
generate_robots_txt = true
compile_sass = true
build_search_index = false

[markdown]
highlight_code = true
highlight_theme = "monokai"

[extra]
author = "Sanchayan Maity"
codeblock = true
enable_search = false
latex = true
after_dark_menu = [
  {url = "$BASE_URL/posts", name = "Posts"},
  {url = "$BASE_URL/about", name = "About"},
  {url = "$BASE_URL/resume", name = "Resume"},
  {url = "https://git.sanchayanmaity.net/sanchayanmaity", name = "Forgejo"},
  {url = "https://gitlab.freedesktop.org/SanchayanMaity", name = "GitLab"},
  {url = "https://sanchayanmaity.com/@sanchayan", name = "Mastodon"},
  {url = "https://www.last.fm/user/sanchayan_maity", name = "Last.fm"},
  {url = "https://keyoxide.org/41B3422B03703BF5FD03FAFB6F6A0609C12038F3", name = "Keyoxide"},
]

content/_index.md (new file)
@@ -0,0 +1,6 @@
+++
title = "Home"
paginate_by = 10
template = "base.html"
+++

content/about.md (new file)
@@ -0,0 +1,61 @@
+++
title = "About"
weight = 10
template = "about.html"
+++

Hello!!

Sanchayan Maity here. Life revolves around electronics, programming, music, books, and anime. Lifelong learner. Primarily interested in Haskell, Functional Programming, Compilers, Mathematics, FPGA, and Operating Systems.

Used to live in Mumbai and worked as an Embedded Systems Engineer from July 2010 to 2012 at Godrej Security Solutions, a subdivision of Godrej & Boyce Mfg. Co. Ltd. Settled in Bangalore now. Worked at [Toradex](https://www.toradex.com/) for 5 years as a Development Engineer on Linux for their downstream BSP while also making [upstream contributions](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?qt=author&q=sanchayan). Spent two years at Harman International working on a custom Linux FPGA driver for a frame grabber system. Currently work at [asymptotic](https://asymptotic.io), where the work mostly involves GStreamer and PipeWire. Was lucky enough to get to work with Haskell for 8 months for a client as the first assignment.

Am passionate about open source and frequently attend meetups. Can often be found at one of the below meetups on a weekend.

- [Rust Bangalore](https://hasgeek.com/rustbangalore)
- [FP India/Bangalore Haskell](https://hasgeek.com/fpindia)
- [Bengaluru Systems Meetup](https://hasgeek.com/bengalurusystemsmeetup)
- [Zig India](https://zigindia.org/)

Also an organizer and frequent speaker for the first two since the start of 2018.

# Tools

Favourite tools for daily use.

- **desktop**: sway
- **editor**: neovim
- **command-line**: tmux, foot, fzf, ripgrep
- **shell**: fish
- **browser**: firefox

In programming languages, statically typed languages in the vein of Haskell and Rust are preferred. While it's important to use the right tool for the job, don't think that dynamically typed languages are the right tool for anything. Software longevity, ease of refactoring and maintenance are absolutely essential, and dynamically typed languages absolutely suck here, no matter which one. Getting something out of the door in a few hours or a day or two isn't a metric to care about when it comes to programming languages, if that piece of software isn't going to be easy to maintain or refactor. If it's not easy to refactor and maintain, it's not worth writing that piece of code in the first place.

## Contact

The best way to get in touch is via email. The part in front of the @ symbol doesn't matter much.

- [sanchayanmaity.net](mailto:me@sanchayanmaity.net)
- [sanchayanmaity.com](mailto:me@sanchayanmaity.com)

You can also reach out on [Matrix](https://matrix.to/#/@sanchayanmaity:matrix.org) or [Telegram](https://t.me/SanchayanMaity).

## Keys

Public key used for signing:

```
41B3 422B 0370 3BF5 FD03 FAFB 6F6A 0609 C120 38F3
```

Also available as a DNS TXT record.

```bash
dig TXT openpgp.sanchayanmaity.net +short | sed 's/"//g'
```

Also available via Web Key Directory at https://keys.openpgp.org/.

```bash
gpg --locate-keys --auto-key-locate clear,nodefault,wkd sanchayanmaity.net
```

content/posts/_index.md (new file)
@@ -0,0 +1,7 @@
+++
title = "Posts"
template = "archive.html"
page_template = "page.html"
sort_by = "date"
paginate_by = 10
+++

content/posts/building-a-bsp.md (new file)
@@ -0,0 +1,113 @@
+++
title = "Building a Board Support Package using Yocto"
date = 2019-01-16
+++

This post assumes a basic working knowledge of Yocto/OpenEmbedded.

At the current workplace, am responsible for everything on the project, be it the app, kernel configuration, custom drivers, enhancements to existing drivers, or setting up the complete board support package for the system. While having worked on OpenEmbedded/Yocto at the previous company, had no experience setting up a complete BSP. This meant having a custom meta layer for this project along with the layers required for Yocto to work, plus the vendor specific layer, which in this case was Xilinx but could be any vendor. Accordingly, one would include `meta-samsung`, `meta-ti` or `meta-intel` etc.

One of the first decisions to make was regarding the flashing process. There was an eMMC on-board and also an SD card. It's possible to have one's own script and then use u-boot's `mmc write` functions to format and write to the MMC. However, decided to go for the easier [genimage](https://github.com/pengutronix/genimage) approach. The Yocto build system builds the relevant components and can pack them in a single image using a configuration like below.

```bash
image boot.vfat {
	vfat {
		files = {
			"BOOT.bin",
			"emergency.ub",
			"image.ub",
			"system.dtb",
			"Image"
		}
	}
	size = 128M
}

image ged.img {
	hdimage {
	}

	partition boot {
		partition-type = 0xC
		bootable = "true"
		image = "boot.vfat"
	}

	partition rootfs {
		partition-type = 0x83
		image = "rootfs-zynqmp.ext4"
		size = 0
	}
}
```

This gives a single file with an `img` extension which can be used to flash the SD card or eMMC using the **dd** command.
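A minimal sketch of that flashing step (file names hypothetical; a scratch file stands in for the SD card here, and on real hardware the output would be the block device, verified first with `lsblk`):

```shell
# Create a small stand-in for the genimage output and "flash" it.
dd if=/dev/zero of=ged.img bs=1M count=4 status=none
# On real hardware: of=/dev/sdX (the SD card), run with appropriate privileges.
dd if=ged.img of=flash.bin bs=4M conv=fsync status=none
# Verify the copy is byte-identical.
cmp ged.img flash.bin && echo "flash verified"
```

`conv=fsync` makes dd flush the data before exiting, which matters when the target is removable media.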

Now, using Yocto requires several meta layers to be put together. One way is to use the repo tool, which is also used in the Android world. However, there was a need for something more, along with Docker integration. Wondering if someone had already done this kind of work, went searching on Google and came across [yoe-distro](https://github.com/YoeDistro/yoe-distro).

`yoe-distro` provided a perfect template on which to base the work. Only needed to integrate the required layers and it does the rest. Docker integration is also included. Now, in case you are wondering why Docker is required, Yocto/OpenEmbedded builds can be sensitive to the host distribution setup. Also, in this case, Xilinx's support provided through Yocto would not build on Ubuntu 18.04. Downgrading would be a waste of time and isn't actually a solution. Even if it worked on Ubuntu 18.04, future upgrades might break the build.

Docker solves this problem by providing a sandboxed environment; think of `chroot` or BSD jails, simplified and on steroids. So ended up integrating the vendor specific meta layers and added a custom layer to provide customisations. Getting started with Yocto definitely requires putting in some effort, but adding customisations is a lot easier, or at least has been for what was being done here.

For example, there is a need to build the kernel from source, since there are some custom patches and, if nothing else, custom device trees which enable support for custom Xilinx IP core drivers. One problem was the internal use of Atlassian Stash: even just cloning the source tree from Stash requires authentication, and it wasn't clear how to do this in Yocto without exposing SSH keys or account information. The `yoe-distro` setup helped here.

The Linux source is added as a git submodule just like the meta layers, and then a custom recipe builds that.

```bash
SRC_URI = "git://${TOPDIR}/sources/ged4k-linux-kernel;protocol=file;branch=ged4k_v2018.2"
SRCREV = "fbec50e15d7b04f44690eb115d2cddd4423e0326"

SRC_URI_append += " file://defconfig \
                    file://fragment.cfg \
                  "

FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
```

As can be seen, the protocol specified isn't git, but file. Another simpler example is u-boot, where there is a need to apply a custom patch on top of Xilinx's u-boot source; just the below in a recipe appending the original one suffices.

```bash
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI_append = "\
    file://0001-configs-xilinx_zynqmp-Config-SD-and-eMMC-boot-for-GE.patch \
"
```

Also had to include an `rl78flash` program which wasn't provided by any of the meta layers in Yocto, and even that was easy. Looked at how some other recipes were written, came up with this, and it just worked.

```bash
SUMMARY = "RL78 Flash Tool"
DESCRIPTION = "Software to program RL78 microcontrollers via Serial Bootloader"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRCREV = "d0f5f05fbbce0a278658ca0caf67f293dcd26ee3"
SRC_URI = "git://github.com/msalau/rl78flash.git;protocol=https;branch=master \
           file://rl78flash.patch \
          "

S = "${WORKDIR}/git"

EXTRA_OEMAKE = "'CC=${CC}' 'CFLAGS=${CFLAGS}'"

do_compile() {
	oe_runmake
}

do_install() {
	install -d ${D}${bindir}
	install -m 0755 ${WORKDIR}/git/rl78flash ${D}${bindir}
}
```

Yocto by default uses `sysvinit`; however, wanted to use systemd as the init. Adding the below to `local.conf` makes systemd the init system.

```bash
DISTRO_FEATURES_append = " systemd"
DISTRO_FEATURES_BACKFILL_CONSIDERED += "sysvinit"
VIRTUAL-RUNTIME_init_manager = "systemd"
VIRTUAL-RUNTIME_initscripts = "systemd-compat-units"
```

It has been a lot of fun learning how to put together a board support package, and Yocto has given so much flexibility.

content/posts/experience-as-speaker-at-meetup.md (new file)
@@ -0,0 +1,16 @@
+++
title = "My experience as a speaker at meetup groups"
date = 2018-10-08
+++

After joining the first meetup group and going on a regular basis, never thought about ending up being a speaker at those meetups. Joined the [rustox](https://www.meetup.com/rustox/) meetup group in May 2017 and have been a regular since then. This meetup group was started by Ragini and Saifi, two wonderful people. They used to conduct this meetup every Sunday morning, and every Sunday morning ended up attending. Learned a lot from both of them. So when, at the end of January 2018, they had to leave Bengaluru for a while to work on a project they had taken up for their own company, was asked to become the organizer and speaker. Had never thought of being a speaker and ended up speaking almost every other weekend.

Saifi had envisioned a series on Rust topics for beginners. The goals for 2018 are outlined [here](https://medium.com/@rustox/rust-bangalore-2018-goals-f1f16397cd47). However, it felt after a while that by catering just to beginners, with new people always coming on board, no progress could be made beyond a certain point. There had already been a beginners series in 2017. With that in mind, and after talking with Raj, who was now the organizer of the sister group [FOSS Cafe](https://www.meetup.com/fosscafe/), it was decided that there would be a fundamental topic one week, and the next week some crate or project would serve as the topic of discussion or presentation, alternating and continuing like that. This also seemed in line with the bi-modal approach which Saifi had envisioned. The first project looked at was [Gutenberg](https://www.getgutenberg.io/), a static website engine in Rust, which is also what this blog has been generated with (at the time of writing at least). Along the way, besides fundamental topics, crates/projects like Rocket, Diesel, Crossbeam, mio, tokio, and futures were covered.

There is an excellent write-up [here](https://dev.to/aspittel/public-speaking-as-a-developer-2ihj) by [Ali Spittel](https://twitter.com/ASpittel) on why you should consider public speaking as a developer. Personally, have found two motivations for organizing and speaking at meetup groups. First, it acts as a great way to reinforce what has been learned in the process of preparing for a meetup talk. It can also be something one has been interested in learning and, having spent a certain amount of time on a topic, decided to speak on it. This is what was being done for the [Functional Programmers Meetup](https://www.meetup.com/Bangalore-Functional-Programmers-Meetup/) as well, where I frequently speak on a topic in Haskell. Often I found that my understanding improved after giving the talk, or that something I didn't understand became much clearer during the discussion, sometimes with help from someone in the audience. Second, I have met some interesting and wonderful people along the way. It's always nice to meet people who are passionate about something and spend their time learning and picking it up, people with different experiences from different domains.

However, it's not all hunky dory. One of the challenges with organizing such meetups is engaging people. Am pretty sure am not great at this; however, I try. Different people have different interests and it's not always possible to cater to everyone. Also, Rust, though a great language, has a steeper learning curve in comparison to something like Python. Unless one is willing to invest time on a continuous basis, it might be difficult to pick up, depending on the background one is coming from.

Sometimes one gets asked a question which might be difficult to answer objectively. The question was "What's the killer app for Rust?", or something very similar. Rust stands on the basis of what it offers and purely on technical merit, not to mention the great community around it. Hats off to Mozilla, Steve Klabnik, Aaron Turon and various others who have been working on Rust. Another difficulty is that only one third of the people who RSVP actually turn up, and having a consistent set of 10-15 members willing to speak or contribute code-wise has been next to impossible.

Now that 2018 is coming to a close, am wondering what the plan for 2019 should be to take this forward and improve on what has been done in 2018. Having seen this [talk](https://www.youtube.com/watch?v=2xyZeovFqCA) recently, think going forward we should try to understand how something like mio or Diesel actually works and has been implemented. The aim should be to be producers and contributors and not just consumers. And last but not least, a big thanks to [HackerEarth](https://www.hackerearth.com/), who have been forthcoming in providing us the venue for meetups in the last few months, and also to [91SpringBoard](http://www.91springboard.com/) and [ThoughtFactory](https://www.axisbank.com/thoughtfactory/index.html), where a lot of the earlier meetups were conducted.

content/posts/gdb-scripting.md (new file)
@@ -0,0 +1,257 @@
+++
title = "Automate debugging using GDB scripting"
date = 2021-11-13
+++

For a while, have had the pleasure of working on a GStreamer
[plugin](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/572)
in Rust at work. The plugin basically rounds the corners of an incoming video,
something akin to the `border-radius` property in CSS. Below is how it looks
when running on a video.

![](/roundedcorners.jpg)

The GStreamer pipeline for the same.

```bash
gst-launch-1.0 filesrc location=~/Downloads/bunny.mp4 ! decodebin ! videoconvert ! roundedcorners border-radius-px=100 ! videoconvert ! gtksink
```

This was the first time working on a video plugin in GStreamer. Had a lot to
learn on how to use the `BaseTransform` class from GStreamer, among other
things. Without getting into the GStreamer specific details here, basically ran
into a problem which needed some debugging to figure out what was going on in
the internals of GStreamer.

Now, while using GDB from the command line has never been a problem, the
straightforward regular approach is time-consuming. Start the pipeline, then
attach gdb to the running process, place breakpoints by manually typing out the
whole thing, and then start. For one-off debugging sessions, where perhaps you
just want to inspect the backtrace from a crash or maybe look into a
deadlock condition where your code hung, this could be fine. However, when you
have to repeat this multiple times, making a source code change, compiling, and
then attaching again each round, it becomes frustrating.

## GDB Dashboard

Looking for a better way, [gdb-dashboard](https://github.com/cyrus-and/gdb-dashboard)
is what first came up as an option. This is quite useful since it can give the needed
information without having to type anything. Using gdb
[hooks](https://git.sr.ht/~sanchayanmaity/dotfiles/tree/master/item/gdb/.gdbinit.d/hooks),
the dashboard can be triggered when appropriate. See the rest of the [gdb
configuration](https://git.sr.ht/~sanchayanmaity/dotfiles/tree/master/item/gdb/.gdbinit.d)
to get an idea. This is useful in scenarios where code is stuck due to a
deadlock, where one needs to look at the backtrace of a crash, or any such
one-off simple investigation.

## Construct breakpoint command in neovim & copy to clipboard

The next small improvement is more specific to neovim. Navigating source code
with neovim opened in one kitty tab and gdb running in a terminal in the next
tab or a split is a preferred workflow personally. Being able to place a
breakpoint without having to type anything out on the gdb prompt would be
convenient. The vimscript code below generates the gdb command, considering the
current line and file the cursor is on in the source when opened in neovim.

```vimscript
function! CopyBpLocToClipboard() abort
    let linenumber = line(".")
    let filepath = expand("%")
    let breakpoint = "break " . filepath . ":" . linenumber
    silent execute "!wl-copy " . breakpoint
endfunction

nnoremap <silent> <Leader>yb :<C-U>call CopyBpLocToClipboard()<CR>
```

By using the preceding key binding, a command like below gets copied to the
clipboard, which can just be pasted at the gdb prompt.

```bash
break subprojects/gst-plugins-base/gst-libs/gst/video/video-frame.c:104
```

Nifty!!!

## GDB scripting

Now imagine a scenario where perhaps one wants to look at multiple places in the
source code and, when the program is running, inspect certain variables or just
print out a backtrace each time a specific code point is reached.

The manual way to do this is to load the executable in gdb or attach to a
running process, place a breakpoint, run, inspect the local variables or print
the stack trace, place the next breakpoint, and repeat the whole process. Just
time-consuming.

GDB can completely automate the preceding process like below.

Below is the `.gdbinit` file applicable to the problem encountered at work.
This is what gdb calls a command file.

```bash
set confirm off
set breakpoint pending on
set logging on
set logging overwrite on
set print pretty on
set pagination off
break subprojects/gst-plugins-base/gst-libs/gst/video/video-frame.c:104 if meta->n_planes == 4
break subprojects/gst-plugins-base/gst-libs/gst/video/gstvideometa.c:228
break subprojects/gstreamer/gst/gstbuffer.c:1410
break subprojects/gst-plugins-base/gst-libs/gst/video/gstvideometa.c:231
break subprojects/gst-plugins-base/gst-libs/gst/video/gstvideometa.c:237
break subprojects/gst-plugins-base/gst-libs/gst/video/video-frame.c:136

commands 1
print i
print *frame
enable 2
enable 3
enable 4
enable 5
enable 6
continue
end
commands 2
print offset
continue
end
commands 3
print offset
print size
continue
end
commands 4
print *(GstBufferImpl *)buffer
print idx
print length
print skip
continue
end
commands 5
disable 2
disable 3
disable 4
print *(GstBufferImpl *)buffer
print info->data
print skip
print *data
continue
end
commands 6
print *frame
quit
end

disable 2
disable 3
disable 4
disable 5
disable 6
run
```

Below is the command to debug the GStreamer plugin in this pipeline with gdb.

```bash
gdb --nx -x .gdbinit --args env RUST_BACKTRACE=1 GST_DEBUG=3,basetransform:6 GST_PLUGIN_PATH=$GST_PLUGIN_PATH:~/GitSources/gst-plugins-rs/target/debug gst-launch-1.0 filesrc location=~/Downloads/bunny.mp4 ! decodebin ! videoconvert ! video/x-raw,format=I420 ! roundedcorners border-radius-px=100 ! video/x-raw,format=A420 ! videoconvert ! gtksink
```

In the preceding command, the `-x` parameter tells gdb to use the command file.
The `--nx` flag tells gdb not to read the `.gdbinit` files in any directory, as
`gdb-dashboard` isn't intended to be used here. `--args` is how one tells
gdb what to run, which is the GStreamer pipeline in this case. See `gdb --help`
for details on the flags.

Now, consider what the command file does. The lines below are just some
settings for gdb to use. Note that logging and pretty printing are enabled.

```bash
set confirm off
set breakpoint pending on
set logging on
set logging overwrite on
set print pretty on
set pagination off
```

Next, specify the breakpoints. There are six breakpoints. These are the source
code locations of interest.

```bash
break subprojects/gst-plugins-base/gst-libs/gst/video/video-frame.c:104 if meta->n_planes == 4
break subprojects/gst-plugins-base/gst-libs/gst/video/gstvideometa.c:228
break subprojects/gstreamer/gst/gstbuffer.c:1410
break subprojects/gst-plugins-base/gst-libs/gst/video/gstvideometa.c:231
break subprojects/gst-plugins-base/gst-libs/gst/video/gstvideometa.c:237
break subprojects/gst-plugins-base/gst-libs/gst/video/video-frame.c:136
```

Breakpoints can be made conditional. The `if meta->n_planes == 4` tells gdb to
consider this breakpoint only when a video frame with 4 planes is received.

Next, gdb has to be told what should be done when each of the preceding
breakpoints is hit.

```bash
commands 1
print i
print *frame
enable 2
enable 3
enable 4
enable 5
enable 6
continue
end
```

`commands 1` means these are the commands for gdb to execute when breakpoint
1 is hit. When breakpoint 1 is hit, the values of `i` and `frame` get printed.
The other breakpoints get enabled only after the first one is hit. This is
because, at the end of the command file, the following commands

```bash
disable 2
disable 3
disable 4
disable 5
disable 6
```

instruct gdb to start with these breakpoints off. They get enabled only
when breakpoint 1 is hit. The `continue` just tells gdb to keep going, as gdb
shouldn't stop on hitting a breakpoint; the logs can be inspected at the
end using the gdb log.

The other breakpoints are specified similarly.

The `run` at the end tells gdb to start executing immediately. In normal usage
one would have to explicitly type `run` at the gdb prompt to make gdb start
debugging.

If it's not clear so far: basically, whatever gdb commands would have been used
for debugging at the gdb prompt is what gets specified in the command file as
well.

After running the below on the terminal

```bash
gdb --nx -x .gdbinit --args env RUST_BACKTRACE=1 GST_DEBUG=3,basetransform:6 GST_PLUGIN_PATH=$GST_PLUGIN_PATH:~/GitSources/gst-plugins-rs/target/debug gst-launch-1.0 filesrc location=~/Downloads/bunny.mp4 ! decodebin ! videoconvert ! video/x-raw,format=I420 ! roundedcorners border-radius-px=100 ! video/x-raw,format=A420 ! videoconvert ! gtksink
```

the pipeline gets executed by gdb, following the command file it was passed,
and whatever it was asked to log is logged as each breakpoint is encountered.
Since logging and pretty printing were enabled earlier, gdb logs everything in
the default `gdb.txt` file. The exact log text file can be seen
[here](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/572#note_1107146),
with `gdbinit` and the other two log files attached.
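Since everything lands in a plain text log, ordinary text tools can slice the session afterwards. A small sketch (the log content below is fabricated for illustration; a real `gdb.txt` depends entirely on the session):

```shell
# Fabricated stand-in for a gdb.txt produced by a command file like the above.
cat > gdb.txt <<'EOF'
Breakpoint 1, gst_video_frame_map_id () at video-frame.c:104
$1 = 0
$2 = {info = {flags = 0}}
Breakpoint 2, gst_buffer_add_video_meta_full () at gstvideometa.c:228
$3 = 115200
EOF
# How many times any breakpoint fired.
grep -c '^Breakpoint' gdb.txt
# Pull out just the pretty-printed values.
grep '^\$' gdb.txt
```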

Now, one can comfortably look at this log and see what's going on. Once the
command file is written, the whole debugging process is completely automated.
Run, sit back, and then look at the logs.

Using gdb is now a breeze and a hassle-free experience. Being able to automate
and log the debugging process like this also means you could share your
command file and someone else could replicate the session.
52
content/posts/gstreamer-hackfest-2024.md
Normal file
|
@ -0,0 +1,52 @@
|
||||||
|
+++
|
||||||
|
title = "Notes from GStreamer spring hackfest of 2024"
|
||||||
|
date = 2024-07-30
|
||||||
|
+++
|
||||||
|
|
||||||
|
Some time has passed since the GStreamer spring hackfest took place in Thessaloniki, Greece in the month of May. Second time attending the GStreamer hackfest and thought about summarizing some of my thoughts this time around.
|
||||||
|
|
||||||
|
## Thanks
|
||||||
|
|
||||||
|
Before getting into the details, want to send out a thank you to:
|
||||||
|
|
||||||
|
- The GStreamer foundation for sponsoring the event as a whole
|
||||||
|
- Sebastian, Vivia and Jordan for making all the arrangements
|
||||||
|
- Asymptotic, for sponsoring my presence at the event
|
||||||
|
|
||||||
|
## The event
|
||||||
|
|
||||||
|
[At the hackfest](https://social.cologne/@francisv/112518409917632684).
|
||||||
|
|
||||||
|
It was good to see some familiar faces at the event, folks whom I had met at the previous hackfest and conference. Also nice when you finally meet people you have only conversed with online and get to put a face on the online persona you have been conversing with.
|
||||||
|
|
||||||
|
## Work
|
||||||
|
|
||||||
|
Originally the plan was to work on adding stream multiplexing support to [QUIC](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/tree/main/net/quinn/src?ref_type=heads) elements. However, missed pushing some of the work to GitLab which was on desktop and decided to work on that later.
|
||||||
|
|
||||||
|
### HTTP Live Streaming (HLS)
|
||||||
|
|
||||||
|
A merge request for adding [multi-variant playlist support with HLS](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1515) has been pending review for a while. One of the features missing from that merge request was support for codec string generation when using MPEG-TS with H-264 and H-265. Decided to work on that.
|
||||||
|
|
||||||
|
H.264 and H.265 have what are known as `stream-formats`: a stream can be stream oriented or packet oriented. In the case of the former, the `stream-format` is said to be `byte-stream`, while in the case of the latter it is said to be `avc`. For `byte-stream`, the required parameter sets are sent in-band with the video, but for `avc`, GStreamer conveys the video metadata out-of-band via an additional caps field named `codec_data`. `codec_data` is only present when the video is packet oriented, that is, when `stream-format` is `avc`; its value represents an `AVCDecoderConfigurationRecord` structure.

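As an illustration, the two variants show up in GStreamer caps roughly as follows (a sketch; the `codec_data` value here is abbreviated and hypothetical):

```
video/x-h264, stream-format=(string)byte-stream, alignment=(string)au
video/x-h264, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)0164001fffe1...
```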
GStreamer already has helper functions in its codec utilities which can provide information like `profile-level`, required for constructing codec strings. However, these helper functions require `codec_data` to be present.

When using MPEG-TS as the container, the only possible `stream-format` with H.264 or H.265 is `byte-stream`. In this case, one needs to parse the in-band data to get information like `profile-level` or other video metadata. In Rust, the [`cros-codecs`](https://docs.rs/cros-codecs/latest/cros_codecs/codec/index.html) crate has a parser module. Using this, it was easy to parse the in-band data and then generate the codec string required for the HLS playlist.

### Threadshare

Before the hackfest, I had spent some time understanding the [`threadshare`](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/tree/main/generic/threadshare/src?ref_type=heads) elements. At the hackfest I met François Laignel, who helped clear the doubts I had about how some of the code in `threadshare` is laid out.

If you are interested in what makes the `threadshare` elements different, I highly recommend going through the blog post [here](https://coaxion.net/blog/2018/04/improving-gstreamer-performance-on-a-high-number-of-network-streams-by-sharing-threads-between-elements-with-rusts-tokio-crate/).

There was some `end-of-stream` handling missing in the `threadshare` `tcpclientsrc` and `udpsrc` elements. I spent some time adding support for that, which has now been [merged](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1581#d18be81ee1a92c24cf3786f98c51bc93bd1ce661) upstream.

## Play

After the three days of hackfest, a day trip was planned to the Palace of Aigai.

[GStreamer hackers & co. exploring the Palace of Aigai](https://social.cologne/@francisv/112530301636663783).

## Conclusion

All in all, this hackfest turned out to be productive and fun-filled. I also have to add that Greek cuisine is excellent, and I look forward to the next hackfest and to visiting Thessaloniki and Greece again.

content/posts/pipewire-compressed-offload.md

+++
title = "Supporting ALSA compressed offload in PipeWire"
date = 2024-03-18
+++

## Background

**Editor's note**: this work was completed in late 2022 but this post was unfortunately delayed.

Modern audio hardware comes with Digital Signal Processors (DSPs) integrated
into SoCs and audio codecs. Processing compressed or encoded data on such
DSPs results in power savings compared to carrying out the same processing on
the CPU.

```
+---------+      +---------+      +---------+
|   CPU   | ---> |   DSP   | ---> |  Codec  |
|         | <--- |         | <--- |         |
+---------+      +---------+      +---------+
```

This post takes a look at how all this works.

## Audio processing

A traditional audio pipeline might look like the one below. An application
reads encoded audio and then might leverage a media framework like GStreamer
or a library like FFmpeg to decode the encoded audio to PCM. The decoded audio
stream is then handed off to an audio server like PulseAudio or PipeWire,
which eventually hands it off to ALSA.

```
+----------------+
|  Application   |
+----------------+
        | mp3
+----------------+
|   GStreamer    |
+----------------+
        | pcm
+----------------+
|    PipeWire    |
+----------------+
        | pcm
+----------------+
|      ALSA      |
+----------------+
```

With ALSA compressed offload, the same audio pipeline would look like this:
the encoded audio stream is passed through to ALSA. ALSA then, via its
compressed offload API, sends the encoded data to the DSP, and the DSP does
the decode and render.

```
+----------------+
|  Application   |
+----------------+
        | mp3
+----------------+
|   GStreamer    |
+----------------+
        | mp3
+----------------+
|    PipeWire    |
+----------------+
        | mp3
+----------------+
|      ALSA      |
+----------------+
```

Since the processing of the compressed data is handed off to specialised
hardware, namely the DSP, this results in a dramatic reduction in power
consumption compared to CPU based processing.

## Challenges

- The ALSA compressed offload API, which is distinct from the ALSA PCM
  interface, provides the control and data streaming interface for audio
  DSPs. In userspace, this API is provided by the
  [tinycompress](https://github.com/alsa-project/tinycompress) library.

- With PCM there is the notion of `bytes ~ time`. For example, 1920 bytes,
  S16LE, 2 channels, 48 kHz corresponds to 10 ms. This breaks down for
  compressed streams: it's impossible to reliably estimate the duration of
  audio buffers when handling most compressed data.

- While sampling rate, number of channels and bits per sample are enough to
  completely specify PCM, various additional parameters may have to be
  specified for the DSP to deal with the multitude of compressed formats.

- For some codecs, additional firmware has to be loaded by the DSP. This has
  to be handled outside the context of the audio server.

## Requirements

- Expose all possible compressed formats.
- Allow a client to negotiate the format.
- Stream encoded audio frames and not PCM.

## PipeWire

PipeWire has become the default sound server on Linux, handling multimedia
routing and audio pipeline processing. It offers capture and playback for
both audio and video with minimal latency and support for PulseAudio, JACK,
ALSA, and GStreamer-based applications.

## SPA

PipeWire is built on top of SPA (Simple Plugin API), a header-only API for
building plugins. SPA provides a set of low-level primitives.

SPA plugins are shared libraries (`.so` files) that can be loaded at runtime.
Each library provides one or more `factories`, each of which may implement
several `interfaces`.

The most interesting interface is the `node`:

- A node consumes or produces buffers through ports.

- In addition to ports and other well defined interface methods, a node can
  have events and callbacks.

Ports are also first class objects within the node:

- There is a set of port related interface methods on the node.

- Ports may be statically allocated at instance initialization.

- There can be dynamic ports managed with the `add_port` and `remove_port`
  methods.

- Ports have `params` which can be queried using the `port_enum_params`
  method to determine the list of supported formats (`EnumFormat`), the
  currently configured format (`Format`), buffer configuration, latency
  information, `I/O areas` for data structures shared by the port, and other
  such information.

- Some params, such as the selected format, can be set using the
  `port_set_format` method.

## Implementing compressed sink SPA node

This section covers the primary implementation details of a PipeWire SPA node
which can accept an encoded audio stream and write it out using the ALSA
compressed offload API.

```c
static const struct spa_node_methods impl_node = {
    SPA_VERSION_NODE_METHODS,
    .add_listener = impl_node_add_listener,
    .set_callbacks = impl_node_set_callbacks,
    .enum_params = impl_node_enum_params,
    .set_io = impl_node_set_io,
    .send_command = impl_node_send_command,
    .add_port = impl_node_add_port,
    .remove_port = impl_node_remove_port,
    .port_enum_params = impl_node_port_enum_params,
    .port_set_param = impl_node_port_set_param,
    .port_use_buffers = impl_node_port_use_buffers,
    .port_set_io = impl_node_port_set_io,
    .process = impl_node_process,
};
```

Some key node methods defining the actual implementation are as follows.

### **`port_enum_params`**

`params` for ports are queried using this method. This is akin to finding out
the capabilities of a port on the node.

For the compressed sink SPA node, the following are present.

- `EnumFormat`

  This builds up the list of encoded formats handled by the node and returns
  it as the result.

- `Format`

  Returns the currently set format on the port.

- `Buffers`

  Provides information on the size and the minimum and maximum number of
  buffers to be used when streaming data to this node.

- `IO`

  The node exchanges information via `IO` areas. There are various types of
  `IO` areas, like buffers, clock, and position. The compressed sink SPA node
  only advertises `buffer` areas at the moment.

The results are returned in an [SPA POD](https://docs.pipewire.org/page_spa_pod.html).

### **`port_use_buffers`**

Tells the port to use the given buffers via the `IO` area.

### **`port_set_param`**

The various `params` on the port are set via this method.

A `Format` param request sets the actual encoded format that is going to be
streamed to this SPA node, for sending to the DSP, by a PipeWire client like
`pw-cat` or an application.

### **`process`**

Buffers containing the encoded media are handled here. The media stream is
written to the `IO` buffer areas which were provided in `use_buffers`, and
the encoded media stream is written out to the DSP by calling
`compress_write`.

### **`add_port`** and **`remove_port`**

Since dynamic ports aren't supported, these methods return `ENOTSUP`.

## `pw-cat`

`pw-cat` was modified to support negotiation of encoded formats and to pass
the encoded stream through as is when linked to the compressed sink node.

## Deploying on hardware

Based on discussions with the upstream compressed offload maintainers, we
chose a Dragonboard 845c with the Qualcomm SDM845 SoC as our test platform.

For deploying Linux on embedded devices, the tool of choice is Yocto. Yocto
is a build automation framework and cross-compile environment used to create
custom Linux distributions/board support packages for embedded devices.

The primary dependencies are:

- tinycompress
- ffmpeg
- PipeWire
- WirePlumber

The `tinycompress` library is what provides the compressed offload API in
userspace. It makes `ioctl()` calls into the underlying kernel driver/sound
subsystem.

`ffmpeg` is a dependency of the example `fcplay` utility provided by
`tinycompress`. It's also used in `pw-cat` to read basic metadata of the
encoded media, which is then used to determine and negotiate the format with
the compressed sink node.

`PipeWire` is where the compressed sink node resides, with `WirePlumber`
acting as the session manager for `PipeWire`.

Going into how Yocto works is beyond the scope of what can be covered in a
blog post. Basic Yocto project concepts can be found [here](https://docs.yoctoproject.org/overview-manual/concepts.html?highlight=meta+layer#yocto-project-concepts).

In Yocto speak, a custom
[meta layer](https://github.com/asymptotic-io/meta-asymptotic) was written.

Yocto makes it quite easy to build `autoconf` based projects. A new
`tinycompress` [bitbake recipe](https://github.com/asymptotic-io/meta-asymptotic/blob/master/recipes-multimedia/tinycompress/tinycompress.bb)
was written to build the latest sources from upstream and also include the
`fcplay` and `cplay` utilities for initial testing.

The existing PipeWire and WirePlumber recipes were modified to point to
custom git sources, with minor changes to the default settings included as
part of the build.

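For a flavour of what such an `autoconf` recipe involves, a minimal sketch (the license fields, checksum, branch, and revision here are placeholders, not the actual recipe contents):

```
SUMMARY = "Library and utilities for the ALSA compressed offload API"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://COPYING;md5=..."

SRC_URI = "git://github.com/alsa-project/tinycompress.git;protocol=https;branch=master"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"

# tinycompress is an autoconf based project, so the autotools class
# does most of the heavy lifting
inherit autotools pkgconfig
```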
## Updates since the original work

Since we completed the original patches, a number of changes have happened
thanks to the community (primarily Carlos Giani). These include:

- A device plugin for autodetecting compress nodes on the system

- Replacing `tinycompress` with an internal library that makes all the
  requisite `ioctl()`s

- Compressed format detection (which was previously waiting on
  [an upstream API addition we implemented in `tinycompress`](https://github.com/alsa-project/tinycompress/pull/16))

## Future work

- Make the compressed sink node provide clocking information. While the API
  provides a method to retrieve timestamp information, the relevant
  timestamp fields don't seem to be populated by the `q6asm-dai` driver.

- Validate other encoded formats. So far only MP3 and FLAC have been
  validated.

- Maybe the wider community can help test this on other hardware.

- Add the capability to the GStreamer plugin to work with the compressed sink
  node. This would also help in validating pause and resume.

content/posts/pipewire-parametric-autoeq.md

+++
title = "Writing a simple PipeWire parametric equalizer module"
date = 2024-05-07
+++

# Motivation

When using headphones or in-ear monitors (IEMs), one might want to EQ them. Equalization, or EQ, is the process of adjusting the volume of different frequency bands in an audio signal. Some popular EQ software is EasyEffects on Linux and Equalizer APO on Windows. PipeWire supports EQ via the [filter-chain](https://docs.pipewire.org/page_module_filter_chain.html) module.

For an understanding of EQ, the following resources might help.

- [The Headphone Show - EQ Basics](https://youtu.be/FRm9qTmQHKo?si=BFi0IH_XiCz1AxWa)
- [The Headphone Show - The Limits of EQ](https://www.youtube.com/watch?v=FD_s2s8Mw9k&t=0s)
- [Graphs 101 - How to Read Headphone Measurements](https://crinacle.com/2020/04/08/graphs-101-how-to-read-headphone-measurements/)

The basic idea is that there are some “standard” frequency response curves that might sound good to different individuals, and knowing the frequency response characteristics of a specific headphone/IEM model, you can apply a set of filters via an equalizer to achieve something close to the “standard” frequency response curve that sounds good to you.

Websites like [Squig](http://squig.link) or [autoeq.app](https://www.autoeq.app/) generate a file with parametric equalization settings for a given target, but this isn't a format that can be directly given to the filter-chain module. Squig is also useful for evaluating the frequency response curves of various in-ear monitors and headphones when making buying decisions.

An example of a parametric EQ file generated by either AutoEq or Squig looks like the below.

```
Preamp: -6.8 dB
Filter 1: ON PK Fc 20 Hz Gain -1.3 dB Q 2.000
Filter 2: ON PK Fc 31 Hz Gain -7.0 dB Q 0.500
Filter 3: ON PK Fc 36 Hz Gain 0.7 dB Q 2.000
Filter 4: ON PK Fc 88 Hz Gain -0.4 dB Q 2.000
```

`Fc` is the frequency, and `Gain` is the amount by which the signal gets boosted or attenuated around that frequency. The `Q` factor controls the bandwidth around the frequency point. To be more precise, `Q` is the ratio of the center frequency to the bandwidth. With the center frequency fixed, the bandwidth is inversely proportional to Q, so raising the Q narrows the bandwidth. Q is by far the most useful tool a parametric EQ offers, allowing one to attenuate or boost a narrow or wide range of frequencies within each EQ band.

If one wants to build a better intuition for this, playing around with the filter types and parameters [here](https://arachnoid.com/BiQuadDesigner/index.html) and seeing the effects on the frequency response helps. The linked article also goes into the basics of filters.

[***EasyEffects***](https://github.com/wwmm/easyeffects) allows importing such a file via its `Import APO` option. However, one might want to use an EQ input like this directly in PipeWire without having to resort to additional software like EasyEffects. That said, during testing, trying out multiple EQs is definitely much easier with the EasyEffects GUI.

Otherwise, this needs to be converted manually into something which the [filter-chain](https://docs.pipewire.org/page_module_filter_chain.html) module can accept.

To simplify this, a simple PipeWire module is implemented which reads a parametric EQ text file like the preceding one and loads the filter-chain module, translating the inputs from the text file into what the filter-chain module expects.

Before writing the PipeWire module, generate such a parametric equalizer file for a selected headphone and target using [AutoEq](https://github.com/jaakkopasanen/AutoEq). While the web interface at [autoeq.app](https://autoeq.app/) can be used, the below section covers how to do this from the command line.

# Automatic headphone equalization

Installation instructions for AutoEq can be found [here](https://github.com/jaakkopasanen/AutoEq?tab=readme-ov-file#installing). Take the example of the Beyerdynamic DT 1990 headphone and the diffuse field 5128 target. For brevity's sake, which target curve to select and why isn't covered here.

Measurements can be found in the [measurements](https://github.com/jaakkopasanen/AutoEq/tree/master/measurements) directory and targets in the [targets](https://github.com/jaakkopasanen/AutoEq/tree/master/targets) directory. Note that if measurements aren't available for the selected headphone, earphone, or in-ear monitor, AutoEq can't be used. `oratory1990` and `crinacle` are well known folks in the audiophile community who have provided measurements for various popular headphones and in-ears. `oratory1990`'s measurements are used below.

Create a results directory `dt1990-results` and then execute the below command.

```bash
python -m autoeq --input-file="measurements/oratory1990/data/over-ear/Beyerdynamic DT 1990 (balanced earpads).csv" --output-dir="dt1990-results" --target="targets/Diffuse field 5128.csv" --parametric-eq --fs=44100,48000
```

The results directory `dt1990-results` has the below output.

```bash
Beyerdynamic DT 1990 (balanced earpads).csv
Beyerdynamic DT 1990 (balanced earpads).png
Beyerdynamic DT 1990 (balanced earpads) GraphicEQ.txt
Beyerdynamic DT 1990 (balanced earpads) ParametricEQ.txt
README.md
```

The `Beyerdynamic DT 1990 (balanced earpads) ParametricEQ.txt` file has the parametric equalizer configuration, which can be given to the PipeWire module discussed next. The `README` has some information and recommendations.

A similar process can be followed for in-ear monitors as well.

# Module

A module is a client in a shared library (`.so`) file which shares a PipeWire context with the loading entity. The PipeWire context is an object which manages all locally available resources. See [here](https://docs.pipewire.org/group__pw__context.html#details).

A module is loaded when it's listed in a PipeWire configuration file. A module's entry point is the `pipewire__module_init` function.

# Writing the module

A PipeWire module goes into the `src/modules` directory. The file is named `module-parametric-equalizer.c` and starts with the `pipewire__module_init` function.

This module primarily has two tasks:

- Parse the provided equalizer configuration into what the `filter-chain` module accepts

- Load the `filter-chain` module with these arguments

The focus here is on these two tasks, ignoring the rest of the ceremony around writing a module.

# Parsing parametric equalizer configuration

Below is the parametric equalizer configuration that was generated in the second section of this post. This configuration is converted to match the module args.

```
Preamp: -6.0 dB
Filter 1: ON LSC Fc 105 Hz Gain -5.2 dB Q 0.70
Filter 2: ON PK Fc 206 Hz Gain -5.3 dB Q 0.51
Filter 3: ON PK Fc 4532 Hz Gain 5.5 dB Q 0.41
Filter 4: ON PK Fc 791 Hz Gain 2.0 dB Q 1.43
Filter 5: ON PK Fc 6829 Hz Gain -3.8 dB Q 2.04
Filter 6: ON HSC Fc 10000 Hz Gain 3.1 dB Q 0.70
Filter 7: ON PK Fc 8944 Hz Gain -1.4 dB Q 2.88
Filter 8: ON PK Fc 3751 Hz Gain -1.6 dB Q 5.97
Filter 9: ON PK Fc 4458 Hz Gain 1.5 dB Q 6.00
Filter 10: ON PK Fc 39 Hz Gain -0.2 dB Q 1.88
```

For every line read, a node entry like below is generated.

```
{
    type = builtin
    name = eq_band_1
    label = bq_lowshelf/bq_highshelf/bq_peaking
    control = { "Freq" = Fc "Q" = Q "Gain" = Gain }
}
```

The PipeWire repository contains a filter-chain configuration [here](https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/daemon/filter-chain/sink-eq6.conf) which shows the structure of the `args` it expects.

When a pre-amp gain is required, which is usually the case when applying EQ, the first EQ band is made a `bq_highshelf` filter at frequency `0 Hz` with the provided negative gain. Pre-amp gain is always negative, to offset possible clipping introduced by the amplification resulting from EQ. For the preceding example,

```
{
    type = builtin,
    name = eq_band_1,
    label = bq_highshelf,
    control = { Freq = 0, Gain = -6.0, Q = 1.0 },
}
```

Similarly, for `Filter 1` this would be:

```
{
    type = builtin,
    name = eq_band_2,
    label = bq_lowshelf,
    control = { Freq = 105, Gain = -5.2, Q = 0.7 },
}
```

Similarly, for the other filters.

PipeWire, as of this writing, doesn't have helpers to create module arguments in code, so `fprintf` is used to construct the filter-chain module arguments as a string.

First, open a `memstream`:

```c
char *args = NULL;
size_t size;

FILE *memstream = open_memstream(&args, &size);
```

Next, write helper functions which generate the `node` entries for the `nodes` array in the filter-chain configuration.

```c
struct eq_node_param {
    char filter_type[4];
    char filter[4];
    uint32_t freq;
    float gain;
    float q_fact;
};

void init_eq_node(FILE *f, const char *node_desc) {
    fprintf(f, "{\n");
    fprintf(f, "node.description = \"%s\"\n", node_desc);
    fprintf(f, "media.name = \"%s\"\n", node_desc);
    fprintf(f, "filter.graph = {\n");
    fprintf(f, "nodes = [\n");
}

void add_eq_node(FILE *f, struct eq_node_param *param, uint32_t eq_band_idx) {
    fprintf(f, "{\n");
    fprintf(f, "type = builtin\n");
    fprintf(f, "name = eq_band_%d\n", eq_band_idx);

    if (strcmp(param->filter_type, "PK") == 0) {
        fprintf(f, "label = bq_peaking\n");
    } else if (strcmp(param->filter_type, "LSC") == 0) {
        fprintf(f, "label = bq_lowshelf\n");
    } else if (strcmp(param->filter_type, "HSC") == 0) {
        fprintf(f, "label = bq_highshelf\n");
    } else {
        fprintf(f, "label = bq_peaking\n");
    }

    fprintf(f, "control = { \"Freq\" = %d \"Q\" = %f \"Gain\" = %f }\n",
            param->freq, param->q_fact, param->gain);

    fprintf(f, "}\n");
}

void end_eq_node(struct impl *impl, FILE *f, uint32_t number_of_nodes) {
    fprintf(f, "]\n");

    fprintf(f, "links = [\n");
    for (uint32_t i = 1; i < number_of_nodes; i++) {
        fprintf(f, "{ output = \"eq_band_%d:Out\" input = \"eq_band_%d:In\" }\n",
                i, i + 1);
    }
    fprintf(f, "]\n");

    fprintf(f, "}\n");
    fprintf(f, "audio.channels = %d\n", impl->channels);
    fprintf(f, "audio.position = %s\n", impl->position);

    fprintf(f, "capture.props = {\n");
    fprintf(f, "node.name = \"effect_input.eq%d\"\n", number_of_nodes);
    fprintf(f, "media.class = Audio/Sink\n");
    fprintf(f, "}\n");

    fprintf(f, "playback.props = {\n");
    fprintf(f, "node.name = \"effect_output.eq%d\"\n", number_of_nodes);
    fprintf(f, "node.passive = true\n");
    fprintf(f, "}\n");

    fprintf(f, "}\n");
}
```

The parsing function relies on the preceding helpers and is now straightforward: read line by line from the file stream using `getline`, parse each line with `sscanf`, and call the helpers.

```c
spa_zero(eq_param);

/* Check for pre-amp gain */
nread = getline(&line, &len, f);
if (nread != -1 && sscanf(line, "%*s %6f %*s", &eq_param.gain) == 1) {
    memcpy(eq_param.filter, "ON", 2);
    memcpy(eq_param.filter_type, "HSC", 3);
    eq_param.freq = 0;
    eq_param.q_fact = 1.0;

    add_eq_node(memstream, &eq_param, eq_band_idx);

    eq_band_idx++;
    eq_bands++;
}

/* Read the filter bands */
while ((nread = getline(&line, &len, f)) != -1) {
    spa_zero(eq_param);

    if (sscanf(line, "%*s %*d: %3s %3s %*s %5d %*s %*s %6f %*s %*c %6f",
               eq_param.filter, eq_param.filter_type, &eq_param.freq,
               &eq_param.gain, &eq_param.q_fact) == 5) {
        if (strcmp(eq_param.filter, "ON") == 0) {
            add_eq_node(memstream, &eq_param, eq_band_idx);

            eq_band_idx++;
            eq_bands++;
        }
    }
}

end_eq_node(impl, memstream, eq_bands);

fclose(memstream);
memstream = NULL;
```

Now `args` holds a string representation of the parametric equalizer configuration, which can be passed while loading the filter-chain module.

# Loading ***`filter-chain`***

The filter-chain module can now be loaded with the `args` from the previous step.

```c
struct pw_impl_module *eq_module;

eq_module = pw_context_load_module(impl->context,
                                   "libpipewire-module-filter-chain",
                                   args, NULL);
```

# Conclusion

The merge request for this work upstream can be found [here](https://gitlab.freedesktop.org/pipewire/pipewire/-/merge_requests/2006).

The module allows one to leverage the built-in equalizer capabilities of PipeWire via its filter-chain module, without having to write the configuration by hand.

There are examples of writing [filter](https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/modules/module-example-filter.c), [sink](https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/modules/module-example-sink.c) and [source](https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/modules/module-example-source.c) modules in the PipeWire repository.

content/posts/rust-run.md

+++
title = "Rust Run Series at the Bangalore Rust meetup group"
date = 2018-12-02
+++

I took over the responsibility of being the primary speaker at the [rustox](https://www.meetup.com/rustox/) meetup group in February 2018 from Ragini and Saifi, who founded the group and used to host it every weekend. I have learned a lot from them.

Saifi had envisioned a series on Rust topics for beginners, which we decided to continue. The goals he outlined for 2018 are [here](https://medium.com/@rustox/rust-bangalore-2018-goals-f1f16397cd47). After a while, though, it felt that by catering to beginners alone, with new people always coming on board, no progress could be made beyond a certain point. There had already been a beginners series in 2017. With that in mind, and after talking with Raj, who was now the organizer of the sister group [FOSS Cafe](https://www.meetup.com/fosscafe/), we decided that one week would cover a fundamental topic and the next week some crate or project, and this would keep alternating. This could cater to beginners as well as those interested in more intermediate level topics.

With [Jyotsna Gupta](https://reps.mozilla.org/u/jenal/) also helping with the Mozilla and Rust swag, we decided to name this series Rust Run. The following meetup events have been conducted as part of the series.

* [Ownership, References & Borrowing](https://www.meetup.com/rustox/events/250768637/)
|
||||||
|
|
||||||
|
* [Raft Consensus Algorithm](https://www.meetup.com/rustox/events/250768716/)
|
||||||
|
|
||||||
|
* [Generics and Traits](https://www.meetup.com/rustox/events/250768752/)
|
||||||
|
|
||||||
|
* [Futures](https://www.meetup.com/rustox/events/250768766/)
|
||||||
|
|
||||||
|
* [Error handling in Rust](https://www.meetup.com/rustox/events/250768799/)
|
||||||
|
|
||||||
|
* [tokio](https://www.meetup.com/rustox/events/250768815/)
|
||||||
|
|
||||||
|
* [Lifetimes in Rust](https://www.meetup.com/rustox/events/250768828/)
|
||||||
|
|
||||||
|
* [Smart Pointers in Rust](https://www.meetup.com/rustox/events/250768886/)
|
||||||
|
|
||||||
|
* [Rocket Web Framework](https://www.meetup.com/rustox/events/250768950/)
|
||||||
|
|
||||||
|
* [Concurrency in Rust](https://www.meetup.com/rustox/events/250768992/)
|
||||||
|
|
||||||
|
* [Diesel](https://www.meetup.com/rustox/events/250769067/)
|
||||||
|
|
||||||
|
* [Flatbuffers](https://www.meetup.com/rustox/events/254812229/)
|
||||||
|
|
||||||
|
A big thanks to [HackerEarth](https://www.hackerearth.com/) who have been forthcoming in providing the venue for meetups in last few months and also to [91SpringBoard](http://www.91springboard.com/) and [ThoughtFactory](https://www.axisbank.com/thoughtfactory/index.html) where a lot of the earlier meetups were conducted. Here are some pictures from the meetups conducted at HackerEarth.
|
||||||
|
|
||||||
|
From the meetup on tokio.
|
||||||
|
![](/tokio.jpg)
|
||||||
|
|
||||||
|
From the meetup on lifetimes.
|
||||||
|
![](/lifetimes.jpeg)
|
||||||
|
|
||||||
|
From the meetup on Rocket.
|
||||||
|
![](/rocket.jpeg)
|
||||||
|
|
||||||
|
From the meetup on Concurrency in Rust.
|
||||||
|
![](/concurrency.jpeg)
|
||||||
|
|
||||||
|
From the meetup on Smart Pointers.
|
||||||
|
![](/smartpointers.jpeg)
|
125
content/posts/tale-of-working-with-uio.md
Normal file
@ -0,0 +1,125 @@
+++
title = "A tale of working with Xilinx DisplayPort & UIO"
date = 2019-08-30
+++

At work, a DisplayPort IP from Xilinx was being used. Xilinx doesn't provide any Linux driver for this. There is documentation for the [TX](https://www.xilinx.com/support/documentation/ip_documentation/v_dp_txss1/v2_0/pg299-v-dp-txss1.pdf) and [RX](https://www.xilinx.com/support/documentation/ip_documentation/v_dp_rxss1/v2_0/pg300-v-dp-rxss1.pdf) subsystems.

Bare metal code is provided; however, support was needed on Linux. Ignoring interrupts, it's easy to get this bare metal code to work on Linux. Xilinx's bare metal code at its core uses the [Xil_Out32](https://github.com/Xilinx/embeddedsw/blob/master/lib/bsp/standalone/src/common/xil_io.h#L219) and [Xil_In32](https://github.com/Xilinx/embeddedsw/blob/master/lib/bsp/standalone/src/common/xil_io.h#L147) functions for writing and reading registers. Their implementations can be replaced with mmap-based register access. The DP TX side doesn't need interrupt handling, and setting up the registers is enough. For RX, however, interrupts are needed. For example, the link training for DP is initiated once a Training Pattern 1 (TP1) interrupt is detected.

Linux being a monolithic kernel, there is a clear separation between kernel and user space, and interrupts can only be handled in kernel space. However, it was easier to use the ported bare metal code in user space, hence the need to handle interrupts from user space. Normally one writes a driver to do all this, but since only the interrupt part had to live in kernel space, the UIO subsystem was a good fit.

Using the UIO subsystem, it's possible to handle interrupts in kernel space while the rest, like reading or writing registers, is done in user space. There's a [Userspace I/O platform driver with generic IRQ handling code](https://elixir.bootlin.com/linux/latest/source/drivers/uio/uio_pdrv_genirq.c). Taking the example of Xilinx DisplayPort RX here: a colleague who works on the FPGA side generates the FPGA firmware along with device trees that have entries for the peripherals configured on the FPGA side. For example, the DP RX peripheral can be in the memory region 0x80004000 to 0x80006000. An interrupt is assigned based on how the FPGA Programmable Logic (PL) connects to the Processing System (PS). PL is the FPGA and PS is the ARM64 SoC.

The extended device tree entry looks like this. *reg* specifies the memory that can be mmapped and accessed in user space, while *interrupts* gets used by the kernel code.

```C
&SUBBLOCK_DP_BASE_v_dp_rxss1_0 {
	compatible = "dprxss-uio";
	interrupt-parent = <&gic>;
	interrupts = <0 92 4 0 92 4>;
	reg = <0x0 0x80004000 0x0 0x2000>;
	status = "okay";
};
```

To link the UIO platform driver to this, add the following to the *bootargs* environment variable in u-boot.

```C
uio_pdrv_genirq.of_id=dprxss-uio
```

This is needed since the compatible property for the device tree isn't specified in the driver; see [here](https://elixir.bootlin.com/linux/latest/source/drivers/uio/uio_pdrv_genirq.c#L252). It's a module parameter instead.

```C
static struct of_device_id uio_of_genirq_match[] = {
	{ /* This is filled with module_parm */ },
	{ /* Sentinel */ },
};
MODULE_DEVICE_TABLE(of, uio_of_genirq_match);
module_param_string(of_id, uio_of_genirq_match[0].compatible, 128, 0);
MODULE_PARM_DESC(of_id, "Openfirmware id of the device to be handled by uio");
```

Now, a combination of poll and read can be used to wait for interrupts in user space. So far so good; however, there are some caveats to be aware of. Once an interrupt is handled, the kernel code disables it.

```C
static irqreturn_t uio_pdrv_genirq_handler(int irq, struct uio_info *dev_info)
{
	struct uio_pdrv_genirq_platdata *priv = dev_info->priv;

	/* Just disable the interrupt in the interrupt controller, and
	 * remember the state so we can allow user space to enable it later.
	 */

	spin_lock(&priv->lock);
	if (!__test_and_set_bit(UIO_IRQ_DISABLED, &priv->flags))
		disable_irq_nosync(irq);
	spin_unlock(&priv->lock);

	return IRQ_HANDLED;
}
```

The interrupt re-enable logic is in the function below.

```c
static int uio_pdrv_genirq_irqcontrol(struct uio_info *dev_info, s32 irq_on)
{
	struct uio_pdrv_genirq_platdata *priv = dev_info->priv;
	unsigned long flags;

	/* Allow user space to enable and disable the interrupt
	 * in the interrupt controller, but keep track of the
	 * state to prevent per-irq depth damage.
	 *
	 * Serialize this operation to support multiple tasks and concurrency
	 * with irq handler on SMP systems.
	 */

	spin_lock_irqsave(&priv->lock, flags);
	if (irq_on) {
		if (__test_and_clear_bit(UIO_IRQ_DISABLED, &priv->flags))
			enable_irq(dev_info->irq);
	} else {
		if (!__test_and_set_bit(UIO_IRQ_DISABLED, &priv->flags))
			disable_irq_nosync(dev_info->irq);
	}
	spin_unlock_irqrestore(&priv->lock, flags);

	return 0;
}
```

This is called from [uio_write](https://elixir.bootlin.com/linux/latest/source/drivers/uio/uio.c#L648). And if you know how [file operations](https://linux-kernel-labs.github.io/master/labs/device_drivers.html) work, `uio_write` is called when a *write* system call is issued on a file descriptor obtained by opening the */dev/uioX* node.

```C
static const struct file_operations uio_fops = {
	.owner = THIS_MODULE,
	.open = uio_open,
	.release = uio_release,
	.read = uio_read,
	.write = uio_write,
	.mmap = uio_mmap,
	.poll = uio_poll,
	.fasync = uio_fasync,
	.llseek = noop_llseek,
};
```

Now, here comes the problem. After the first interrupt, no more interrupts were being handled; the write call wasn't re-enabling the interrupt. Putting print statements in `uio_write` showed that calling write didn't result in an invocation of `uio_write` at all.

After being perplexed and wasting 4-5 hours trying to figure out what was wrong, I wrote a small piece of code outside the project workspace which opened `/dev/uioX` and then did a write. In this case, the prints from `uio_write`, which eventually calls `uio_pdrv_genirq_irqcontrol` to enable the interrupt, could be seen. So something was wrong with the project setup.

Using neovim and ctags for code navigation, jump-to-definition on the write call ended up in a `write.c` file. The initial project setup had been done by the FPGA engineer colleague, since the Xilinx SDK generates bare metal code samples based on the design. I had not noticed this file before. It seemed to be an artifact of the code ported over from bare metal and had a write function as below.

```C
__attribute__((weak)) sint32 write(sint32 fd, char8 *buf, sint32 nbytes)
```

I was aware of the weak attribute from working on u-boot, where it's used to let board specific functions override default ones. The GCC manual defines it as: _The weak attribute causes the declaration to be emitted as a weak symbol rather than a global. This is primarily useful in defining library functions which can be overridden in user code_.

There was no other write function defined in the project. Ideally the symbol should have been picked up from `glibc`; however, that wasn't happening. The write implementation in `write.c`, which was actually writing to a UART port, wasn't needed, so after removing it everything started working fine. DP link training was finally succeeding.

One can read more about UIO [here](https://www.kernel.org/doc/html/latest/driver-api/uio-howto.html).
52
content/posts/using-os-containers-for-openembedded-build.md
Normal file
@ -0,0 +1,52 @@
+++
title = "Using OS containers for OpenEmbedded build"
date = 2015-06-02
+++

I use Arch Linux on my laptop and on the server which gets used for OpenEmbedded builds, but OpenEmbedded doesn't exactly like rolling release distros; for whatever reason, current builds just fail. I remember facing this problem when using Arch while learning OpenEmbedded in the early days, which led to the first OpenEmbedded post. Not having the time to debug such a thing, there was another solution at hand.

The solution is to use a `chroot` jail, or, with systemd as init, `systemd-nspawn`. So let's see how to set up an environment which can be used for builds. Assuming one has yaourt installed,

```bash
yaourt -S debootstrap
mkdir ~/debian-root
sudo debootstrap --arch=amd64 --variant=minbase jessie ~/debian-root
sudo systemd-nspawn -D debian-root
```

The third command is going to take a while. After running the last command, one would be inside the `chroot` jail as the root user. After that, run the following.

```bash
apt-get update
apt-get install sudo
adduser --uid 1000 --gid 100 username
visudo
su username
```

Assuming you know what to add after running `visudo`: basically, add yourself to the `sudoers` file. To get the user id and group id for the `adduser` command, run `id` on the host. The last command drops you into the home directory of the newly created user. Whenever you want to work on OpenEmbedded, just use `systemd-nspawn` to log in to the Debian environment. This environment can be kept for as long as one likes to maintain the build setup, while the normal user space can be updated without fear of breaking the build. One can replicate the exact Debian environment by copying the complete Debian root directory with something like `rsync` to any number of machines.

Also note that Debian isn't the only option. One can do this with Fedora, Arch, and probably other distros too.

Output from the Debian root environment.

```bash
sanchayan@debian-root:~/oe-core/build$ uname -a
Linux debian-root 4.0.0-ck1-bfs-ck+ #8 SMP PREEMPT Wed Apr 29 11:35:34 IST 2015 x86_64 GNU/Linux
sanchayan@debian-root:~/oe-core/build$ pwd
/home/sanchayan/oe-core/build
```

Output of `machinectl` from the host.

```bash
machinectl
MACHINE     CLASS     SERVICE
debian-root container nspawn

1 machines listed.
```

The [systemd for Administrators](http://0pointer.net/blog/archives.html) series should be interesting to you.

Do have a look at the man pages for `systemd-nspawn` and `machinectl`.
40
content/resume.md
Normal file
@ -0,0 +1,40 @@
+++
title = "Resume"
weight = 20
template = "resume.html"
+++

# Work

- [asymptotic](https://asymptotic.io/): January 2020 - Current
- [Harman International](https://www.harman.com/): December 2017 - December 2019
- [Toradex](https://www.toradex.com/): January 2013 - December 2017
- [Godrej & Boyce - Security Solutions](https://www.godrej.com/godrej-security-solutions): July 2010 - January 2013

# Open source contributions

- [gst-plugins-rs](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests?scope=all&state=all&author_username=SanchayanMaity)
- [GStreamer](https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests?scope=all&state=all&author_username=SanchayanMaity)
- [PipeWire](https://gitlab.freedesktop.org/pipewire/pipewire/-/merge_requests?scope=all&state=all&author_username=SanchayanMaity)
- [PulseAudio](https://gitlab.freedesktop.org/pulseaudio/pulseaudio/-/merge_requests?scope=all&state=all&author_username=SanchayanMaity)
- [Linux](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?qt=author&q=Sanchayan+Maity)
- [u-boot](https://source.denx.de/u-boot/u-boot/-/commits/master?search=Sanchayan%20Maity)

# Talks

- [State of QUIC in GStreamer](https://gstconf.ubicast.tv/videos/state-of-quic-in-gstreamer/)
- [HLS master playlist management made easy](https://gstconf.ubicast.tv/videos/hls-master-playlist-management-made-easy/)
- [Bridging WebRTC and SIP using GStreamer & SIPjs](https://gstconf.ubicast.tv/videos/bridging-webrtc-and-sip-using-gstreamer-sipjs/)
- [What's next for Bluetooth in PulseAudio?](https://www.youtube.com/watch?v=aU8Hjt2h6GQ)
- [Using Rust and GStreamer for building multimedia pipelines](https://hasgeek.com/rustlang/rustlang-mini-conference/schedule/using-rust-for-building-multimedia-pipelines-using-gstreamer-UfW59v9esDeZut5mXUVXhy)
- [Free Monads](https://www.youtube.com/watch?v=fhu1UQel5eo)
- [Lens](https://www.youtube.com/watch?v=0sAvCCAz8DQ)

# Skills

- **Languages**: Rust, Haskell, C, TypeScript, JavaScript, Python
- **Technologies**: GStreamer, PipeWire, PulseAudio, Embedded Linux, u-boot
- **Build Systems**: Meson, OpenEmbedded, Yocto, buildroot
- **Processors**: Tegra 2, Tegra 3, iMX6, Vybrid, Xilinx Ultrascale
- **VCS**: git, jujutsu
- **Debuggers**: GDB
BIN
static/concurrency.jpeg
Normal file
After Width: | Height: | Size: 422 KiB |
BIN
static/lifetimes.jpeg
Normal file
After Width: | Height: | Size: 649 KiB |
BIN
static/playback2.jpg
Normal file
After Width: | Height: | Size: 26 KiB |
BIN
static/rocket.jpeg
Normal file
After Width: | Height: | Size: 570 KiB |
BIN
static/roundedcorners.jpg
Normal file
After Width: | Height: | Size: 120 KiB |
BIN
static/smartpointers.jpeg
Normal file
After Width: | Height: | Size: 681 KiB |
BIN
static/tokio.jpg
Normal file
After Width: | Height: | Size: 185 KiB |
1
templates/about.html
Normal file
@ -0,0 +1 @@
{% extends "page.html" %}
11
templates/archive.html
Normal file
@ -0,0 +1,11 @@
{% extends "index.html" %}

{% for year, posts in section.pages | group_by(attribute="year") %}
<h2>{{ year }}</h2>

<ul>
  {% for post in posts %}
  <li><a href="{{ post.permalink }}">{{ post.title }}</a></li>
  {% endfor %}
</ul>
{% endfor %}
92
templates/base.html
Normal file
@ -0,0 +1,92 @@
{% import "post_macros.html" as post_macros %}

<!DOCTYPE html>
<html lang="en">

<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta http-equiv="content-type" content="text/html; charset=utf-8">

  <!-- Enable responsiveness on mobile devices-->
  <meta name="viewport" content="width=device-width, initial-scale=1.0">

  {% if page %}
  {% if page.description %}
  <meta name="description" content="{{ page.description }}" />
  {% elif config.description %}
  <meta name="description" content="{{ config.description }}" />
  {% endif %}
  {% elif config.description %}
  <meta name="description" content="{{ config.description }}" />
  {% endif %}

  <title>{% block title %}{{ config.title }}{% endblock title %}</title>

  {% if config.generate_feed %}
  <link rel="alternate" type={% if config.feed_filename=="atom.xml" %}"application/atom+xml"{% else
  %}"application/rss+xml"{% endif %} title="RSS" href="{{ get_url(path=config.feed_filename) | safe }}">
  {% endif %}

  {% block css %}
  <link rel="stylesheet" href="{{ get_url(path='site.css', trailing_slash=false) | safe }}">
  {% endblock css %}

  {% block extra_head %}
  {% endblock extra_head %}
</head>

<body class="hack dark main container">
  {% block content %}
  {% block header %}
  {% if config.extra.after_dark_menu %}
  <header class="nav-header">
    <nav itemscope itemtype="http://schema.org/SiteNavigationElement" class="navbar">
      <div class="nav-links">
        {% for item in config.extra.after_dark_menu %}
        <a itemprop="url"
          class="{% if item.url | replace(from='$BASE_URL', to=config.base_url)==current_url %}active{% endif %}"
          href="{{ item.url | safe | replace(from='$BASE_URL', to=config.base_url) | safe }}">
          <span itemprop="name">{{ item.name }}</span></a>
        {% endfor %}
      </div>
    </nav>
    {% if config.extra.enable_search %}
    <div class="search-container">
      <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" class="search-icon">
        <path stroke-linecap="round" stroke-linejoin="round"
          d="m21 21-5.197-5.197m0 0A7.5 7.5 0 1 0 5.196 5.196a7.5 7.5 0 0 0 10.607 10.607Z" />
      </svg>
      <input type="text" id="search" placeholder="Search...">
      <div class="search-results">
        <div class="search-results__items"></div>
      </div>
    </div>
    {% endif %}

  </header>
  {% endif %}
  {% endblock header %}

  <main>
    {% if config.extra.after_dark_title %}
    <header>
      <h1>{{ config.extra.after_dark_title }}</h1>
    </header>
    {% endif %}
  </main>
  {% endblock content %}

  <!-- optional scripts -->
  {% block scripts %}
  {% if config.extra.codeblock %}
  <script src="{{ get_url(path='js/codeblock.js', trailing_slash=false) | safe }}"></script>
  {% endif %}

  {% include "partials/latex.html" %}
  {% include "partials/search.html" %}

  {% endblock scripts %}
</body>

</html>
2
templates/resume.html
Normal file
@ -0,0 +1,2 @@
{% extends "page.html" %}
43
templates/robots.txt
Normal file
@ -0,0 +1,43 @@
User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: Amazonbot
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: cohere-ai
User-agent: Diffbot
User-agent: DuckAssistBot
User-agent: FacebookBot
User-agent: FriendlyCrawler
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iaskspider/2.0
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: Meta-ExternalAgent
User-agent: Meta-ExternalFetcher
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: PanguBot
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: Scrapy
User-agent: Sidetrade indexer bot
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: Webzio-Extended
User-agent: YouBot
Allow: /
Sitemap: {{ get_url(path="sitemap.xml") }}
1
themes/after-dark
Submodule
@ -0,0 +1 @@
Subproject commit 1f32860b07b0ae2e71773dcb77e6228afd929e8d