General information

This section contains some general information, tips and considerations about developing or contributing to Kadi4Mat.

Useful tools

Kadi CLI

The Kadi command line interface (CLI) contains various useful tools and utilities as part of multiple subcommands and is available automatically after installing the package. An overview of all commands can be obtained by simply running:

kadi

The Kadi CLI ensures that each subcommand runs inside the context of the application, which is why it always needs access to the correct Kadi4Mat environment and, if applicable, configuration file (see also the manual development installation). For this reason, some subcommands are simply wrappers over existing ones provided by other libraries, making their use in certain scenarios or environments easier. See also cli.

virtualenvwrapper

virtualenvwrapper is an extension to the Virtualenv tool and can be used to manage and switch between multiple virtual environments more easily. The tool can be installed globally via pip while no virtual environment is currently active:

pip3 install virtualenvwrapper

Afterwards, some environment variables have to be set. Generally, a suitable place for them is the ~/.bashrc file. An example could look like the following:

export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
export WORKON_HOME=${HOME}/.venvs
source ${HOME}/.local/bin/virtualenvwrapper.sh

Please refer to the official documentation for the meaning of these variables as well as other possible variables that can be used, as their values differ by system and personal preference.
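Once set up, environments can be managed using the commands virtualenvwrapper provides. A typical workflow, using a hypothetical environment name, might look like the following (these commands are only available once virtualenvwrapper has been installed and sourced as shown above):

```
mkvirtualenv kadi   # Create and activate a new environment named "kadi"
deactivate          # Deactivate the currently active environment
workon kadi         # Activate the "kadi" environment again
rmvirtualenv kadi   # Delete the environment entirely
```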

EditorConfig

For general editor settings related to indentation, maximum line length and line endings, the settings in the .editorconfig file can be applied. This file can be used in combination with a text editor or IDE that supports it. For more information, take a look at the EditorConfig documentation.
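The actual settings are defined by the project's own .editorconfig file. As a rough illustration of the format, such a file typically looks similar to the following sketch (the values shown here are illustrative and not necessarily those used by Kadi4Mat):

```
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

[*.{js,vue}]
indent_size = 2
```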

pre-commit

pre-commit is a framework for managing and maintaining multi-language pre-commit hooks, which are executed each time git commit is run. The tool itself should be installed already. The hooks listed in .pre-commit-config.yaml can be installed by simply running:

pre-commit install

The hooks can also be run manually on all versioned and indexed files using:

pre-commit run -a

The versions of all hooks can be updated automatically by running:

pre-commit autoupdate
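Entries in .pre-commit-config.yaml follow the usual pre-commit format. A minimal sketch with a single illustrative hook might look like the following (the revision shown is a placeholder, which is exactly what pre-commit autoupdate keeps current):

```
repos:
  - repo: https://github.com/psf/black
    rev: 24.1.0  # Placeholder revision, managed via "pre-commit autoupdate"
    hooks:
      - id: black
```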

black

black is a code formatter which is used throughout all Python code in the project. The tool itself should be installed already and can be applied to one or multiple files using:

black <path>

Besides running black on the command line, various integrations are available for different text editors and IDEs. black is also part of the pre-commit hooks. As such, it will run automatically on each commit or when running the pre-commit hooks manually.

isort

isort is another kind of code formatter with a focus on sorting and grouping import statements throughout all Python code in the project. The tool itself should be installed already and can be applied to one or multiple files using:

isort <path>

isort will automatically use the configuration specified in the [tool.isort] section inside pyproject.toml. Similar to black, various integrations are available for different text editors and IDEs. Furthermore, isort is part of the pre-commit hooks and will run automatically on each commit or when running the pre-commit hooks manually.
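As an illustration of the resulting style, the import grouping isort typically produces looks like the following sketch, with standard library imports first, third-party imports second and local application imports last, each group sorted alphabetically (the third-party and local imports shown as comments are hypothetical placeholders):

```python
# Standard library imports form the first group.
import json
import sys

# Third-party imports would form the second group, e.g.:
# import flask

# Local application imports would form the last group, e.g.:
# from kadi.lib import utils

# The grouping itself has no runtime effect; the modules behave as usual.
print(json.dumps({"python": sys.version_info.major}))
```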

autoflake

autoflake is a tool mainly used to help with the common case of removing unused import statements. The tool itself should be installed already and can be applied to one or multiple files, or recursively to directories via the -r flag, using:

autoflake -i -r <path>

autoflake will automatically use the configuration specified in the [tool.autoflake] section inside pyproject.toml. As it is also part of the pre-commit hooks, there is usually no need to run it manually. However, it may be necessary to exclude certain files or imports, which can be achieved with one of the options shown in the following example:

# autoflake: skip_file

import unused_import  # noqa

Pylint

Pylint is a static code analysis tool for Python and should already be installed as well. It can be used on the command line to aid with detecting some common programming or style mistakes, even when not using an IDE that already does so. For example, the whole kadi package can be linted by running the following command:

pylint kadi

Pylint will automatically use the configuration specified in the [tool.pylint.*] sections inside pyproject.toml. Sometimes, certain code should be excluded from specific checks. Using special comments, one can instruct Pylint to skip such code, e.g. the following line will not raise a message for an unused import statement:

import something  # pylint: disable=unused-import

ESLint

ESLint is a linter and basic code formatter which is used for all JavaScript code throughout the project, including any code snippets inside script tags and Vue.js components. It should already be installed and can be applied to the whole kadi folder using the eslint script exposed by the npm command. Note that npm needs access to the package.json file, see also Managing frontend dependencies.

npm run eslint kadi

The configuration of ESLint can be found inside .eslintrc.js. Besides running ESLint on the command line, various integrations are available for different text editors and IDEs. Some files also contain code that should be excluded from specific checks. Using special comments again, one can instruct ESLint to skip such code, e.g. both of the following variants suppress errors for unused variables in the specified function:

/* eslint-disable no-unused-vars */
function foo(a) {}
/* eslint-enable no-unused-vars */

// eslint-disable-next-line no-unused-vars
function foo(a) {}

ESLint is also part of the pre-commit hooks. As such, it will run automatically on each commit or when running the pre-commit hooks manually.

Backend development

Prototyping Python code

For quick prototyping or testing certain functionality, the Kadi shell can prove useful:

kadi shell
>>> app           # Imported automatically
<Kadi 'kadi.app'>
>>> dir()         # Get an overview of all automatically imported names

This shell is almost exactly the same as a regular interactive Python shell (or as the Flask shell), except that it ensures that a Flask application context is pushed automatically. Additionally, all database model classes and various other names are imported automatically.
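For example, database queries can be prototyped interactively inside the shell. A hypothetical session might look like the following, assuming a User model is among the automatically imported names (the exact names depend on the application's models):

```
kadi shell
>>> User.query.first()     # Retrieve the first user from the database, if any
>>> db.session.rollback()  # Discard any pending changes before exiting
```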

Simulating sending emails

To simulate sending emails without using an actual SMTP server, the following command can be used to start a debugging SMTP server, which simply prints the emails to the terminal instead:

kadi utils smtpd

The development configuration of Kadi4Mat already includes the correct values in order to make use of this debugging server, so no further changes should be necessary.

Adjusting or adding database models

For managing incremental changes to the database schema, potentially including existing data, Alembic is used via Flask-Migrate. These tools allow generating migration scripts, each corresponding to a database revision, which other developers or system administrators can run to apply the same changes to their own database.
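A generated migration script is a plain Python file containing an upgrade and a downgrade function. A heavily simplified skeleton looks roughly like the following sketch (the table and column names are purely illustrative, and the revision identifier is filled in when generating the script):

```python
"""Add some new table."""
from alembic import op
import sqlalchemy as sa


def upgrade():
    # Applied when upgrading the database to this revision.
    op.create_table(
        "example",
        sa.Column("id", sa.Integer(), primary_key=True),
        sa.Column("title", sa.Text(), nullable=False),
    )


def downgrade():
    # Applied when downgrading the database from this revision.
    op.drop_table("example")
```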

When adding a new model or adjusting an existing one, a new migration script has to be created to perform the necessary upgrades (see also migrations). To automatically generate such a script, the following command can be used:

kadi migrations migrate -m "Add some new table"

Note

When adding a new model, it needs to be imported somewhere in order for the tools to find it, as otherwise no changes might be detected. Furthermore, the current database schema always needs to be up to date with the latest migration script for the command to work.

The resulting code of the migration script should be checked and adjusted as needed, as not all changes to the models may be detected automatically, such as new check constraints. Additionally, further steps may be necessary to migrate existing data when adjusting existing models. Afterwards, the database can be upgraded by running the following command:

kadi migrations upgrade

When making further changes to a model after applying a newly generated migration script during development, it is usually best to recreate the script rather than creating another one. However, before deleting the old script, make sure to downgrade the database, as otherwise it may end up in an inconsistent state with regard to the revisions:

kadi migrations downgrade

Managing dependencies

All Python dependencies are currently specified via the pyproject.toml file, which lists all direct or otherwise major dependencies used in the project. All package versions are pinned in order to ensure (mostly) deterministic installations. To check for new package versions, the following helper script included in the Kadi4Mat source code can be used:

${HOME}/workspace/kadi/bin/check_requirements.py

Especially in case of major updates, any updated dependencies should always be checked for potentially breaking changes beforehand and for any issues that may arise in the application after the update.

Frontend development

Writing frontend code

To process and package all frontend assets, including Vue.js components, into individual JavaScript bundles runnable in a browser context, webpack is used. The main webpack configuration resides in webpack.config.js, while a few other relevant global settings are defined in kadi/assets/scripts/main.js, which constitutes the main entry point for webpack. When writing frontend code, it is necessary to run the following in a separate terminal:

kadi assets watch

This way, changes to existing files will be detected and the resulting bundles in kadi/static/dist will be rebuilt automatically. When adding new files, the command might have to be restarted to pick them up, depending on which directory the files reside in. See also assets and static.

Managing dependencies

All frontend dependencies are managed using npm, the package manager of Node.js. The corresponding npm command uses the dependencies and configuration options as specified in the package.json file, so it must always be run somewhere inside the application’s root directory. Additionally, a package-lock.json file is generated automatically each time the package.json file is updated by npm to ensure deterministic installations. In order to install a new dependency, the following command can be used:

npm install <package>

This will add the new dependency to package.json automatically. In order to check all existing dependencies for updates, the following command can be used:

npm outdated

The outdated packages may be shown in different colors, depending on how each package is specified in package.json and on the magnitude of the update in accordance with Semantic Versioning. To apply any updates, one of the following commands can be used:

npm update                  # Automatically update all packages with compatible versions
npm install <package>@x.y.z # Install version x.y.z (e.g. a new major version) of a package

Especially in case of major updates, any updated dependencies should always be checked for potentially breaking changes beforehand and for any issues that may arise in the application after the update.

Security audits

npm offers built-in functionality to check all installed packages and subpackages for known vulnerabilities. As certain vulnerabilities may not be relevant for various reasons, e.g. in case of false positives or because the affected package is only used during development, it is possible to ignore specific vulnerabilities using audit-ci via the audit-ci.json file.

The modified security audit can then be run using the following command. Note that npm needs access to the package.json file, see also Managing frontend dependencies.

npm run audit

Developing under WSL

On Windows, it is possible to develop Kadi4Mat using the Windows Subsystem for Linux (WSL), which allows one to run a Linux environment without the need for a separate virtual machine or dual booting. This chapter focuses on some WSL-specific differences to the regular development installation instructions.

Note

Some of the instructions explained in this chapter may require the use of WSL 2 and/or Windows 11.

systemd

As the installation instructions make use of systemd for managing background services, it is recommended to set up systemd in WSL as well, if not already the case. Doing so should only require adding the following lines to the file /etc/wsl.conf (which might have to be created first) within the WSL and restarting it afterwards:

[boot]
systemd=true

Development environments

When choosing a development environment to use in combination with WSL, the easiest option is Visual Studio Code (VS Code) in combination with its WSL extension, as explained in the official documentation. This allows one to keep working under Windows, while the code itself and all additional tools stay and/or run within the WSL.

Tip

Using other development environments or text editors is possible as well, but will usually require setting up an X server on the Windows side and configuring it to be used by the WSL.

Common issues

The Flask dev server and/or the webpack watcher are not picking up changes

Both the Flask development server and webpack use inotify to efficiently watch for file changes in order to automatically restart the server or rebuild the asset bundles. The number of file watches that inotify can use may be limited by the operating system by default. In that case, the limit can be increased permanently by running:

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

This amount should be fine in most cases. However, note that this might use more (unswappable) kernel memory than before.
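To inspect the limit currently in effect before changing it, the corresponding proc file can simply be read (Linux only):

```shell
# Print the current maximum number of inotify watches per user.
cat /proc/sys/fs/inotify/max_user_watches
```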

Code running inside a background task is not updated

When modifying Python code that is run as part of a background task via Celery and the Celery worker itself is already running, it needs to be restarted to pick up any new changes.

There is currently no built-in way to automatically restart the worker on code changes. However, the code that is actually executed in the task should generally be runnable outside of a task as well, so testing it independently is preferred before testing the Celery integration.
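This separation can be sketched as follows, with the task wrapper shown only as a comment, since its exact form depends on the Celery setup (the function and task names used here are hypothetical):

```python
# Keep the actual logic in a plain function, so it can be run and tested
# without a Celery worker or broker being involved at all.
def process_record(record_id):
    # ...the actual work would happen here...
    return f"Processed record {record_id}"


# A thin Celery task would then simply delegate to the plain function, e.g.:
#
# @celery.task
# def process_record_task(record_id):
#     return process_record(record_id)

# The logic can be tested directly, independently of any task machinery:
print(process_record(42))
```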

Database migrations do not work correctly

Especially when working on multiple branches with differing migration scripts, managing database migrations might lead to various kinds of issues, sometimes without printing any helpful error message at all when using the Kadi CLI.

The best way to fix these kinds of issues is to first identify the current revision that is used by the database, e.g. by using the Kadi CLI:

kadi migrations current

If no revision identifier is printed, it is also possible to directly query the database:

sudo -Hiu postgres psql -c "SELECT * FROM alembic_version" kadi

For this revision identifier, a corresponding migration script should exist somewhere, which needs to be present to upgrade or downgrade the database to the desired version. If this is not an option, e.g. if the script was deleted already, it might be easier to simply recreate the database, as resetting it via the Kadi CLI likely won’t work with a broken migration script chain:

sudo -Hiu postgres dropdb kadi
sudo -Hiu postgres createdb -O kadi -E utf-8 -T template0 kadi

Afterwards, the database can be initialized again using one of the following commands:

kadi db init        # Initialize the database
kadi db sample-data # Initialize the database and setup some sample users and resources