Commit 67fa52dedb

The previous commit laid out the foundation of the quota engine; this one builds on top of it and implements the actual enforcement.

Enforcement happens at the route decoration level, whenever possible. In the case of the API, when over quota, a 413 error is returned with an appropriate JSON payload. In the case of web routes, a 413 HTML page is rendered with similar information.

This implementation is for a **soft quota**: quota usage is checked before an operation is performed, and the operation is *only* denied if the user is already over quota. This makes it possible to go over quota, but has the significant advantage of being practically implementable within the current Forgejo architecture.

The goal of enforcement is to deny actions that can make the user go over quota, and to allow the rest. As such, deleting things should, in almost all cases, be possible. A prime exception is deleting files via the web UI: that creates a new commit, which in turn increases the repo size, and is therefore denied if the user is over quota.

Limitations
-----------

Because we generally work at the route decorator level, and rarely look *into* the operation itself, `size:repos:public` and `size:repos:private` are not enforced at this level; the engine enforces against `size:repos:all`. This will be improved in the future.

AGit does not play very well with this system, because AGit PRs count toward the repo they're opened against, while in the GitHub-style fork + pull model, they count against the fork. This, too, can be improved in the future.

Very little is done on the UI side to guard against going over quota. What this patch implements is enforcement, not prevention. The UI will still let you *try* operations that *will* result in a denial.

Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
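For illustration, here is a minimal sketch of what such a route-level check could look like, written against the standard library's `net/http`. The `QuotaService` interface, the middleware shape and the subject string are assumptions made for this sketch, not Forgejo's actual API.

```go
package quota

import (
	"encoding/json"
	"net/http"
)

// QuotaService is a hypothetical stand-in for the quota engine from the
// previous commit: EvaluateForUser reports whether the user is still within
// the limit for the given subject (e.g. "size:repos:all").
type QuotaService interface {
	EvaluateForUser(userID int64, subject string) (ok bool, err error)
}

// EnforceAPI wraps an API route and denies it with 413 and a JSON payload when
// the user is already over quota. Soft quota: the operation itself is never
// inspected, only the user's current usage.
func EnforceAPI(q QuotaService, subject string, userID func(*http.Request) int64, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ok, err := q.EvaluateForUser(userID(r), subject)
		if err != nil {
			http.Error(w, "quota evaluation failed", http.StatusInternalServerError)
			return
		}
		if !ok {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusRequestEntityTooLarge) // 413
			_ = json.NewEncoder(w).Encode(map[string]string{
				"message": "quota exceeded",
				"subject": subject,
			})
			return
		}
		next.ServeHTTP(w, r) // within quota (or only just over): allow the operation
	})
}
```

A web-route variant would render the 413 HTML page instead of the JSON body.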
# Gitea Package Registry
This document gives a brief overview of how the package registry is organized in code.
## Structure
The package registry code is divided into multiple modules to split the functionality and make code reuse possible.
| Module | Description |
|---|---|
| `models/packages` | Common methods and models used by all registry types |
| `models/packages/<type>` | Methods used by a specific registry type. There should be no need to use type-specific models. |
| `modules/packages` | Common methods and types used by multiple registry types |
| `modules/packages/<type>` | Registry-type-specific methods and types (e.g. metadata extraction of package files) |
| `routers/api/packages` | Route definitions for all registry types |
| `routers/api/packages/<type>` | Route implementation for a specific registry type |
| `services/packages` | Helper methods used by registry types in routers to handle common tasks like package creation and deletion |
| `services/packages/<type>` | Registry-type-specific methods used by routers and services |
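To make the layering concrete, here is a minimal sketch of how the per-type route packages could be mounted under a common prefix. It uses the standard library's `net/http` mux purely for illustration; the real code uses Gitea's own web router, and the function name, paths and handler parameters here are assumptions.

```go
package packages

import "net/http"

// Routes mounts one sub-handler per registry type, mirroring the
// routers/api/packages/<type> package layout. The handlers passed in would be
// built from the type-specific route implementations; the paths are illustrative.
func Routes(pypi, npm, generic http.Handler) *http.ServeMux {
	mux := http.NewServeMux()
	mux.Handle("/api/packages/pypi/", http.StripPrefix("/api/packages/pypi", pypi))
	mux.Handle("/api/packages/npm/", http.StripPrefix("/api/packages/npm", npm))
	mux.Handle("/api/packages/generic/", http.StripPrefix("/api/packages/generic", generic))
	return mux
}
```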
## Models
Every package registry implementation uses the same underlying models:
| Model | Description |
|---|---|
| `Package` | The root of a package, providing values fixed for every version (e.g. the package name) |
| `PackageVersion` | A version of a package, containing metadata (e.g. the package description) |
| `PackageFile` | A file of a package, describing its content (e.g. the file name) |
| `PackageBlob` | The content of a file (may be shared by multiple files) |
| `PackageProperty` | Additional properties attached to `Package`, `PackageVersion` or `PackageFile` (e.g. used if metadata is needed for routing) |
The following diagram shows the relationship between the models:
Package <1---*> PackageVersion <1---*> PackageFile <*---1> PackageBlob
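A simplified sketch of these models and their links is shown below; the field names are trimmed down for illustration and do not reflect the actual schema.

```go
package packages

// Package holds values fixed for every version, e.g. the package name.
type Package struct {
	ID      int64
	OwnerID int64
	Type    string
	Name    string
}

// PackageVersion is one version of a Package and carries version-level
// metadata such as the description.
type PackageVersion struct {
	ID        int64
	PackageID int64 // 1 Package ---* PackageVersion
	Version   string
}

// PackageFile describes one file of a version (e.g. its file name).
type PackageFile struct {
	ID        int64
	VersionID int64 // 1 PackageVersion ---* PackageFile
	BlobID    int64 // * PackageFile ---1 PackageBlob
	Name      string
}

// PackageBlob is the stored content; it may be shared by multiple files.
type PackageBlob struct {
	ID         int64
	Size       int64
	HashSHA256 string
}

// PackageProperty attaches extra key/value data to a Package, PackageVersion
// or PackageFile, e.g. when metadata is needed for routing.
type PackageProperty struct {
	ID      int64
	RefType int64 // which kind of model the property belongs to
	RefID   int64
	Name    string
	Value   string
}
```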
## Adding a new package registry type
Before adding a new package registry type, have a look at the existing implementations to get an impression of how it could work.
Most registry types offer endpoints to retrieve metadata and to upload and download package files.
The upload endpoint is often the heavy part because it must validate the uploaded blob, extract metadata and create the models.
The methods to validate the upload and extract the metadata should be added in the `modules/packages/<type>` package.
If the upload is valid, the methods in `services/packages` can be used to store the upload and create the corresponding models.
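As an example of what a `modules/packages/<type>` helper might look like, here is a hypothetical metadata parser for a fictional registry type; the package name, function name, metadata fields and validation rules are assumptions, not an actual module API.

```go
package fictional

import (
	"encoding/json"
	"errors"
	"io"
)

// Metadata is the version-level metadata extracted from an uploaded blob.
// The fields are illustrative only.
type Metadata struct {
	Name        string `json:"name"`
	Version     string `json:"version"`
	Description string `json:"description"`
}

var ErrInvalidPackage = errors.New("fictional: invalid package upload")

// ParsePackage validates the uploaded blob and extracts its metadata.
// A real implementation would typically read an archive and pick the
// manifest out of it instead of expecting plain JSON.
func ParsePackage(r io.Reader) (*Metadata, error) {
	var m Metadata
	if err := json.NewDecoder(r).Decode(&m); err != nil {
		return nil, ErrInvalidPackage
	}
	if m.Name == "" || m.Version == "" {
		return nil, ErrInvalidPackage
	}
	return &m, nil
}
```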
Which method should be called depends on whether the registry type allows multiple files per package version (a sketch of a typical upload route follows this list):

- `CreatePackageAndAddFile`: error if the package version already exists
- `CreatePackageOrAddFileToExisting`: error if the file already exists
- `AddFileToExistingPackage`: error if the package version does not exist or the file already exists
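A rough sketch of such an upload route, continuing the fictional package from the parser sketch above. The plain `createFunc` stands in for the `services/packages` helpers listed above; their real signatures take richer creation-info structs, and the handler wiring is an assumption.

```go
package fictional

import (
	"bytes"
	"io"
	"net/http"
)

// createFunc stands in for one of the services/packages creation helpers
// (e.g. CreatePackageOrAddFileToExisting); the real signatures differ.
type createFunc func(ownerID int64, name, version, filename string, content io.Reader) error

// UploadHandler shows the typical shape of an upload route: buffer the blob,
// validate it and extract metadata via the modules/packages/<type> parser,
// then store it through services/packages.
func UploadHandler(create createFunc, ownerID func(*http.Request) int64) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Buffer the upload so it can be read twice (parse + store); the real
		// code uses a hashed buffer that can spill to disk for large uploads.
		buf, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "reading upload failed", http.StatusBadRequest)
			return
		}
		meta, err := ParsePackage(bytes.NewReader(buf))
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		filename := meta.Name + "-" + meta.Version + ".pkg" // illustrative naming
		if err := create(ownerID(r), meta.Name, meta.Version, filename, bytes.NewReader(buf)); err != nil {
			// e.g. the file already exists for this package version
			http.Error(w, err.Error(), http.StatusConflict)
			return
		}
		w.WriteHeader(http.StatusCreated)
	}
}
```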
`services/packages` also contains helper methods to download a file or to remove a package version.
There are no helper methods for metadata endpoints because they are very type specific.
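To illustrate why metadata endpoints stay type-specific, here is a sketch of one for the fictional registry above. The `lookupFunc` and the JSON response shape are assumptions; a real registry type would render whatever wire format its clients expect (JSON, XML, HTML indexes, and so on).

```go
package fictional

import (
	"encoding/json"
	"net/http"
)

// lookupFunc stands in for a models/packages query that resolves a package
// version and its stored metadata.
type lookupFunc func(ownerID int64, name, version string) (*Metadata, error)

// MetadataHandler renders version metadata in the registry's own wire format;
// this rendering is exactly the part that is too type-specific for a shared
// helper (here it is plain JSON).
func MetadataHandler(lookup lookupFunc, ownerID func(*http.Request) int64) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Real routes usually take these from path parameters; query parameters
		// keep the sketch independent of a particular router.
		name := r.URL.Query().Get("name")
		version := r.URL.Query().Get("version")
		meta, err := lookup(ownerID(r), name, version)
		if err != nil {
			http.Error(w, "package version not found", http.StatusNotFound)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(meta)
	}
}
```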