When issue templates were moved into services in
def4956122, the code was also refactored
and simplified. Unfortunately, that simplification broke the
`/api/v1/{owner}/{repo}/issue_templates` route, because it was
previously using a helper function that ignored invalid templates, and
after the refactor, the function it called *always* returned non-nil as
the second return value. This, in turn, results in the aforementioned
endpoint always returning an internal server error.
This change restores the previous behaviour of ignoring invalid files
returned by `issue.GetTemplatesFromDefaultBranch`, and adds a few test
cases to exercise the endpoint.
Other users of `GetTemplatesFromDefaultBranch` already ignore the second
return value, or handle it correctly, so no changes are necessary there.
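For illustration, a minimal sketch of the restored behaviour (handler name, imports and error reporting are simplified and not the exact Forgejo code):

```go
// Sketch: invalid template files are skipped instead of failing the request.
func GetIssueTemplates(ctx *context.APIContext) {
	// GetTemplatesFromDefaultBranch returns the parsed templates together with
	// a map of file name -> error for the files it could not parse.
	templates, errs := issue.GetTemplatesFromDefaultBranch(ctx.Repo.Repository, ctx.Repo.GitRepo)
	if len(errs) > 0 {
		log.Debug("GetIssueTemplates: ignoring %d invalid issue template(s)", len(errs))
	}
	// Only the valid templates are returned; a non-empty error map no longer
	// turns the response into a 500.
	ctx.JSON(http.StatusOK, templates)
}
```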
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
Previously, the repo wiki was hardcoded to use `master` as its branch;
this change makes it possible to use `main` (or something else, governed
by `[repository].DEFAULT_BRANCH`, a setting that already exists and
defaults to `main`).
The way it is done is that a new column is added to the `repository`
table: `wiki_branch`. The migration will make existing repositories
default to `master`, for compatibility's sake, even if they don't have a
Wiki (because it's easier to do that). Newly created repositories will
default to `[repository].DEFAULT_BRANCH` instead.
The Wiki service was updated to use the branch name stored in the
database, and fall back to the default if it is empty.
Old repositories with Wikis using the older `master` branch will have
the option to do a one-time transition to `main`, available via the
repository settings in the "Danger Zone". This option will only be
available for repositories that have the internal wiki enabled, whose
wiki is not empty, and whose wiki branch is not `[repository].DEFAULT_BRANCH`.
When migrating a repository with a Wiki, Forgejo will use the same
branch name for the wiki as the source repository did. If that's not the
same as the default, the option to normalize it will be available after
the migration's done.
Additionally, the `/api/v1/{owner}/{repo}` endpoint was updated: it will
now include the wiki branch name in `GET` requests, and allow changing
the wiki branch via `PATCH`.
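A minimal sketch of the fallback behaviour described above, assuming a helper along these lines (the helper name is illustrative):

```go
// GetWikiBranchName returns the branch the wiki lives on, falling back to the
// instance-wide default branch when the stored value is empty (sketch).
func (repo *Repository) GetWikiBranchName() string {
	if repo.WikiBranch != "" {
		return repo.WikiBranch
	}
	return setting.Repository.DefaultBranch
}
```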
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
(cherry picked from commit d87c526d2a)
- When a POST operation is successful, it should return a 201 status
code (the status code for successful creation) and, additionally,
the created object.
- Currently, for the `POST /repos/{owner}/{repo}/tags` endpoint, a 200
status code was documented in the OpenAPI specification, while a 201
status code was actually being returned. In this case the code is
correct and the documented status code needs to be adjusted (see the
sketch below).
- Resolves #2200
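A sketch of what the handler's final response roughly looks like (the conversion helper is an assumption); only the status code documented in the OpenAPI specification needed to change:

```go
// The tag was created successfully, so reply with 201 Created and the new
// object, matching what the code already did (sketch).
ctx.JSON(http.StatusCreated, convert.ToTag(ctx.Repo.Repository, tag))
```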
(cherry picked from commit a2939116f5)
(cherry picked from commit 22cff41585)
(cherry picked from commit b23a7f27bb)
- The name could be confused with the `GET
/user/applications/oauth2/{id}` operation, as it only differed in a
single letter being uppercase. Change it to
userGetOAuth2Application**s**, as that's also more accurate for this function.
- Resolves #2163
(cherry picked from commit 1891dac547)
(cherry picked from commit 68fceb9b7a)
(cherry picked from commit 7335d6de54)
- Document the correct content types for Git archives, and add code that
actually sets the correct content type for `.zip` and `.tar.gz`
archives (see the sketch after this list).
- When an action (POST/PUT/DELETE method) is successful, a 204 status
code should be returned instead of status code 200.
- Add and adjust integration tests.
- Resolves #2180
- Resolves #2181
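A sketch of the content-type handling, with illustrative variable names; the MIME types themselves are the standard ones for zip and gzip archives:

```go
// Set the correct Content-Type for the requested archive format (sketch).
switch archiveType {
case git.ZIP:
	ctx.Resp.Header().Set("Content-Type", "application/zip")
case git.TARGZ:
	ctx.Resp.Header().Set("Content-Type", "application/gzip")
}
```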
(cherry picked from commit 6c8c4512b5)
(cherry picked from commit 3f74bcb14d)
(cherry picked from commit 6ed9057fd7)
Instead of repeating, in every handler, the checks that verify that the
ID of a comment is related to the repository of the API endpoint, add
the middleware function commentAssignment() to assign ctx.Comment once
the comment ID has been verified to belong to the repository (a sketch
of the middleware follows the test lists).
There already are integration tests for cases of potentially unrelated
comment IDs; they cover some of the modified endpoints and thereby the
commentAssignment() function logic:
* TestAPICommentReactions - GetIssueCommentReactions
* TestAPICommentReactions - PostIssueCommentReaction
* TestAPICommentReactions - DeleteIssueCommentReaction
* TestAPIEditComment - EditIssueComment
* TestAPIDeleteComment - DeleteIssueComment
* TestAPIGetCommentAttachment - GetIssueCommentAttachment
The other modified endpoints do not have tests that verify the handling
of potentially unrelated comment IDs. They no longer need them, because
they no longer implement that logic themselves. They do, however, all
have integration tests verifying that the commentAssignment() middleware
they now rely on does not introduce a regression:
* TestAPIGetComment - GetIssueComment
* TestAPIListCommentAttachments - ListIssueCommentAttachments
* TestAPICreateCommentAttachment - CreateIssueCommentAttachment
* TestAPIEditCommentAttachment - EditIssueCommentAttachment
* TestAPIDeleteCommentAttachment - DeleteIssueCommentAttachment
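A sketch of what commentAssignment() boils down to (field and helper names are approximations of the actual code):

```go
// commentAssignment loads the comment from the :id parameter and only assigns
// ctx.Comment when the comment belongs to the repository of the request.
func commentAssignment() func(ctx *context.APIContext) {
	return func(ctx *context.APIContext) {
		comment, err := issues_model.GetCommentByID(ctx, ctx.ParamsInt64(":id"))
		if err != nil {
			ctx.NotFoundOrServerError("GetCommentByID", issues_model.IsErrCommentNotExist, err)
			return
		}
		if err := comment.LoadIssue(ctx); err != nil {
			ctx.InternalServerError(err)
			return
		}
		if comment.Issue.RepoID != ctx.Repo.Repository.ID {
			ctx.NotFound() // unrelated comment ID: pretend it does not exist
			return
		}
		ctx.Comment = comment
	}
}
```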
(cherry picked from commit d414376d74)
(cherry picked from commit 09db07aeae)
(cherry picked from commit f44830c3cb)
Conflicts:
modules/context/api.go
https://codeberg.org/forgejo/forgejo/pulls/2249
(cherry picked from commit 9d1bf7be15)
Refs: https://codeberg.org/forgejo/forgejo/issues/2109
(cherry picked from commit 8b4ba3dce7)
(cherry picked from commit 196edea0f9)
[GITEA] POST /repos/{owner}/{repo}/pulls/{index}/reviews/{id}/comments (squash) do not implicitly create a review
If the review already contains a comment, a new comment is correctly
added to it. But if it is the first comment added to the review, a new
review is implicitly created instead of the comment being attached to
the existing one.
The pull_service.CreateCodeComment function is responsible for this
behavior: it defers to createCodeComment once the review is determined,
either because it was found or because it was created.
Rename createCodeComment to CreateCodeCommentKnownReviewID to expose it,
and change the API endpoint to use it instead. Since the review is
provided by the user and already verified to exist, there is no need
for the logic implemented by CreateCodeComment.
The tests are modified to remove the initial comment from the fixture
because it was creating a false positive. The test was verified to fail
without this fix.
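Roughly, the endpoint now attaches the comment to the already-verified review directly (the argument list below is illustrative, not the exact signature):

```go
// The review was looked up from the {id} in the URL and verified to exist, so
// the find-or-create-review logic of CreateCodeComment is not needed (sketch).
comment, err := pull_service.CreateCodeCommentKnownReviewID(ctx, ctx.Doer, ctx.Repo.GitRepo,
	review.Issue, line, body, treePath, false /* pending */, review.ID, latestCommitID)
if err != nil {
	ctx.InternalServerError(err)
	return
}
```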
(cherry picked from commit 6a555996dc)
(cherry picked from commit b173a0ccee)
(cherry picked from commit 838ab9740a)
Expose the repository flags feature over the API, so the flags can be
managed by a site administrator without using the web UI.
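For illustration, the route shape could look roughly like this (handler and middleware names below are hypothetical, not necessarily the ones in the Forgejo code):

```go
// Hypothetical sketch of admin-only flag management routes.
m.Group("/repos/{owner}/{repo}/flags", func() {
	m.Get("", repo.ListFlags)         // list all flags set on the repository
	m.Put("", repo.ReplaceAllFlags)   // replace the whole set of flags
	m.Delete("", repo.DeleteAllFlags) // clear every flag
	m.Group("/{flag}", func() {
		m.Get("", repo.HasFlag)       // check whether a single flag is set
		m.Put("", repo.AddFlag)       // set a single flag
		m.Delete("", repo.DeleteFlag) // remove a single flag
	})
}, reqSiteAdmin())
```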
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
(cherry picked from commit bac9f0225d)
(cherry picked from commit e7f5c1ba14)
(cherry picked from commit 95d9fe19cf)
(cherry picked from commit 7fc51991e4)
- Switch the supported schemes for the Swagger API around, such that
https is listed first. This ensures that when the Swagger API is used,
it will default to the https scheme, which is likely the scheme you
want to use in the majority of cases.
- Resolves #1895
BREAKING CHANGE NOTICE:
If you are using the Swagger API JSON directly to communicate with the
Forgejo API, the library you are using may use the first scheme
defined in the JSON file (e.g. https://code.forgejo.org/swagger.v1.json)
to construct the request URL. This used to be `http` but has now changed
to `https`, which can cause failures if you want to send the request
over `http` (and there is no HTTPS redirection configured).
(cherry picked from commit 81e5f43886)
(cherry picked from commit d847469ea2)
(cherry picked from commit 96e75e1d5c)
(cherry picked from commit 65baa64261)
(cherry picked from commit cd3e0a74e6)
(cherry picked from commit a3127e90b2)
(cherry picked from commit 2b22272dc5)
(cherry picked from commit 7363790592)
(cherry picked from commit 432b9a4451)
This field adds the possibility to set the update date when modifying
an issue through the API.
A 'NoAutoDate' in-memory field is added to the Issue struct.
If the updated_at field is set, NoAutoDate is set to true and the
Issue's UpdatedUnix field is filled.
That information is passed down to the functions that actually update
the database, which have been modified to not auto-update dates when
requested.
A guard is added to the 'EditIssue' API call to check that the
updated_at date is between the issue's creation date and the current
date (to avoid 'malicious' changes). It also limits the new feature
to project owners and admins.
(cherry picked from commit c524d33402)
Add a SetIssueUpdateDate() function in services/issue.go
That function is used by some API calls to set the NoAutoDate and
UpdatedUnix fields of an Issue if an updated_at date is provided.
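A sketch of what SetIssueUpdateDate() does, combining the permission guard and the date-range check described above (exact types and helper names are approximations):

```go
// SetIssueUpdateDate fills NoAutoDate/UpdatedUnix on the issue when the API
// caller supplied an updated_at value (sketch).
func SetIssueUpdateDate(ctx context.Context, issue *issues_model.Issue, updatedAt *time.Time, doer *user_model.User) error {
	issue.NoAutoDate = false
	if updatedAt == nil {
		return nil // no forced date: keep the automatic timestamps
	}
	// Only repository owners and admins may force an update date.
	perm, err := access_model.GetUserRepoPermission(ctx, issue.Repo, doer)
	if err != nil {
		return err
	}
	if !perm.IsAdmin() && !perm.IsOwner() {
		return fmt.Errorf("user needs to have admin or owner rights")
	}
	// The forced date must lie between the issue's creation date and "now".
	if updatedAt.Before(issue.CreatedUnix.AsTime()) || updatedAt.After(time.Now()) {
		return fmt.Errorf("provided updated_at is out of range")
	}
	issue.UpdatedUnix = timeutil.TimeStamp(updatedAt.Unix())
	issue.NoAutoDate = true
	return nil
}
```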
(cherry picked from commit f061caa655)
Add an updated_at field to the API calls related to Issue's Labels.
The update date is applied to the issue's comment created to inform
about the modification of the issue's labels.
(cherry picked from commit ea36cf80f5)
Add an updated_at field to the API call for issue's attachment creation
The update date is applied to the issue's comment created to inform
about the modification of the issue's content, and is set as the
asset creation date.
(cherry picked from commit 96150971ca)
Checking Issue changes, with and without providing an updated_at date
Those unit tests are added:
- TestAPIEditIssueWithAutoDate
- TestAPIEditIssueWithNoAutoDate
- TestAPIAddIssueLabelsWithAutoDate
- TestAPIAddIssueLabelsWithNoAutoDate
- TestAPICreateIssueAttachmentWithAutoDate
- TestAPICreateIssueAttachmentWithNoAutoDate
(cherry picked from commit 4926a5d7a2)
Add an updated_at field to the API call for issue's comment creation
The update date is used as the comment creation date, and is applied to
the issue as its update date.
(cherry picked from commit 76c8faecdc)
Add an updated_at field to the API call for issue comment editing
The update date is used as the comment update date, and is applied to
the issue as an update date.
(cherry picked from commit cf787ad7fd)
Add an updated_at field to the API call for comment's attachment creation
The update date is applied to the comment, and is set as the asset
creation date.
(cherry picked from commit 1e4ff424d3)
Checking Comment changes, with and without providing an updated_at date
Those unit tests are added:
- TestAPICreateCommentWithAutoDate
- TestAPICreateCommentWithNoAutoDate
- TestAPIEditCommentWithAutoDate
- TestAPIEditCommentWithNoAutoDate
- TestAPICreateCommentAttachmentWithAutoDate
- TestAPICreateCommentAttachmentWithNoAutoDate
(cherry picked from commit da932152f1)
Prettier code to set the update time of comments
Now uses sess.AllCols().NoAutoTime().SetExpr("updated_unix", ...)
XORM is smart enough to compose one single SQL UPDATE with all
columns + updated_unix.
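For reference, a sketch of the expression described above, assuming the session is already scoped to the comment being updated:

```go
// One single UPDATE over all columns, with updated_unix forced to the
// requested value rather than the auto-generated "now" (sketch).
_, err := sess.ID(comment.ID).AllCols().
	NoAutoTime().
	SetExpr("updated_unix", comment.UpdatedUnix).
	Update(comment)
```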
(cherry picked from commit 1f6a42808d)
Issue edition: Keep the max of the milestone and issue update dates.
When editing an issue via the API, an updated_at date can be provided.
If the EditIssue call changes the issue's milestone, the milestone's
update date is changed accordingly, but only ever to a greater
value.
This ensures that a milestone's update date is the max of all issue's
update dates.
(cherry picked from commit 8f22ea182e)
Rewrite the 'AutoDate' tests using subtests
Also add a test to check the permissions to set a date, and a test
to check update dates on milestones.
The tests related to 'AutoDate' are:
- TestAPIEditIssueAutoDate
- TestAPIAddIssueLabelsAutoDate
- TestAPIEditIssueMilestoneAutoDate
- TestAPICreateIssueAttachmentAutoDate
- TestAPICreateCommentAutoDate
- TestAPIEditCommentWithDate
- TestAPICreateCommentAttachmentAutoDate
(cherry picked from commit 961fd13c55)
(cherry picked from commit d52f4eea44)
(cherry picked from commit 3540ea2a43)
Conflicts:
services/issue/issue.go
https://codeberg.org/forgejo/forgejo/pulls/1415
(cherry picked from commit 56720ade00)
Conflicts:
routers/api/v1/repo/issue_label.go
https://codeberg.org/forgejo/forgejo/pulls/1462
(cherry picked from commit 47c78927d6)
(cherry picked from commit 2030f3b965)
(cherry picked from commit f02aeb7698)
Conflicts:
routers/api/v1/repo/issue_attachment.go
routers/api/v1/repo/issue_comment_attachment.go
https://codeberg.org/forgejo/forgejo/pulls/1575
(cherry picked from commit d072525b35)
(cherry picked from commit 8424d0ab3d)
(cherry picked from commit 5cc62caec7)
(cherry picked from commit d6300d5dcd)
[FEAT] allow setting the update date on issues and comments (squash) apply the 'updated_at' value to the cross-ref comments (#1676)
[this is a follow-up to PR #764]
When a comment of issue A referencing issue B is added with a forced 'updated_at' date, that date has to be applied to the comment created in issue B.
-----
Comment:
While trying my 'RoundUp migration script', I found that this case was forgotten in PR #764 - my apologies...
I'll try to write a functional test, based on models/issues/issue_xref_test.go
Reviewed-on: https://codeberg.org/forgejo/forgejo/pulls/1676
Co-authored-by: fluzz <fluzz@freedroid.org>
Co-committed-by: fluzz <fluzz@freedroid.org>
(cherry picked from commit ac4f727f63)
(cherry picked from commit 5110476ee9)
(cherry picked from commit 77ba6be1da)
(cherry picked from commit 9c8337b5c4)
(cherry picked from commit 1d689eb686)
(cherry picked from commit 511c519c87)
(cherry picked from commit 2f0b4a8f61)
(cherry picked from commit fdd4da111c)
[FEAT] allow setting the update date on issues and comments (squash) do not use token= query param
See https://codeberg.org/forgejo/forgejo/commit/33439b733a
(cherry picked from commit c5139a75b9)
(cherry picked from commit c7b572c35d)
(cherry picked from commit aec7503ff6)
(cherry picked from commit 87c65f2a49)
(cherry picked from commit bd47ee33c2)
(cherry picked from commit f3dbd90a74)
Fixes #28660
Fixes an admin API bug related to `user.LoginSource`
Fixes the `/user/emails` response not being identical to the GitHub API
This PR unifies the user update methods. The goal is to keep the logic
in only one place (with audit logs in mind). For example, do the
password checks in only one method, not everywhere a password is updated.
After this PR is merged, user creation should be next.
In #28691, schedule plans are deleted when a repo's actions unit is
disabled. But when the unit is re-enabled, the schedule plans are not
created again.
This PR fixes the bug: the schedule plans will be created again when the
actions unit is re-enabled.
## Purpose
This is a refactor toward building an abstraction over managing git
repositories.
Afterwards, it does not matter anymore if they are stored on the local
disk or somewhere remote.
## What this PR changes
Previously, we used `git.OpenRepository` everywhere. Now, usage is split
into two distinct functions.
First, there are temporary repositories, which do not change:
```go
git.OpenRepository(ctx, diskPath)
```
Gitea-managed repositories, which have a record in the `repository`
table of the database, are opened via the new package `gitrepo`:
```go
gitrepo.OpenRepository(ctx, repo_model.Repo)
```
Why is `repo_model.Repository` the second parameter instead of the file
path?
Because it lets us easily adapt our repository storage strategy:
the repositories can be stored locally, but they could just as well
be stored on a remote server.
## Further changes in other PRs
- A Git command wrapper could be added to the `gitrepo` package, e.g.
`NewCommand(ctx, repo_model.Repository, commands...)`. The directory in
`git.RunOpts{Dir: repo.RepoPath()}` should then be left empty before
invoking the method and be filled in only inside the function. #28940
- Remove the `RepoPath()`/`WikiPath()` functions to reduce the
possibility of mistakes.
---------
Co-authored-by: delvh <dev.lh@web.de>
Currently, the `updateMirror` function, which updates the mirror interval
and the enable-prune property, is only executed by the `Edit` function,
and it is only triggered if `opts.MirrorInterval` is not null, even if
`opts.EnablePrune` is not null.
With this patch, it is now possible to update the enable_prune property
with a PATCH request without modifying the mirror_interval.
## Example request with httpie
### Currently:
**Does nothing**
```bash
http PATCH https://gitea.your-server/api/v1/repos/myOrg/myRepo "enable_prune:=false" -A bearer -a $gitea_token
```
**Updates both properties**
```bash
http PATCH https://gitea.your-server/api/v1/repos/myOrg/myRepo "enable_prune:=false" "mirror_interval=10m" -A bearer -a $gitea_token
```
### With the patch
**Updates enable_prune only**
```bash
http PATCH https://gitea.your-server/api/v1/repos/myOrg/myRepo "enable_prune:=false" -A bearer -a $gitea_token
```
Sometimes you need to work on a feature which depends on another
(unmerged) feature. In this case, you may create a PR based on that
feature instead of the main branch. Currently, such PRs are closed
without the possibility to reopen them once the parent feature is merged
and its branch is deleted. Automatically changing the target branch makes
life a lot easier in such cases; GitHub and Bitbucket behave this way.
Example:
PR1: main <- feature1
PR2: feature1 <- feature2
Currently, merging PR1 and deleting its branch leads to PR2 being closed
without the possibility to reopen it. This is both annoying and loses the
review history when you open a new PR.
With this change, PR2 will change its target branch to main
(PR2: main <- feature2) after PR1 has been merged and its branch has been
deleted.
This behavior is enabled by default but can be disabled.
For security reasons, this target branch change will not be executed when
merging PRs targeting another repo.
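Conceptually, the retargeting step runs after a merge when the merged PR's head branch is deleted; a sketch with illustrative names for the setting, the lookup helper and the retarget call:

```go
// Sketch: move open PRs that targeted the deleted branch onto the merged PR's
// own base branch (names below are illustrative).
func retargetChildrenOnMerge(ctx context.Context, doer *user_model.User, pr *issues_model.PullRequest) error {
	if !setting.Repository.PullRequest.RetargetChildrenOnMerge || pr.BaseRepoID != pr.HeadRepoID {
		return nil // feature disabled, or the PR targets another repo (excluded for security)
	}
	children, err := findOpenPullRequestsByBaseBranch(ctx, pr.HeadRepoID, pr.HeadBranch) // illustrative helper
	if err != nil {
		return err
	}
	for _, child := range children {
		// e.g. "PR2: feature1 <- feature2" becomes "PR2: main <- feature2".
		if err := changeTargetBranch(ctx, child, doer, pr.BaseBranch); err != nil { // illustrative helper
			return err
		}
	}
	return nil
}
```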
Fixes #27062
Fixes #18408
---------
Co-authored-by: Denys Konovalov <kontakt@denyskon.de>
Co-authored-by: delvh <dev.lh@web.de>
Fixes#27114.
* In Gitea 1.12 (#9532), a "dismiss stale approvals" branch protection
setting was introduced, for ignoring stale reviews when verifying the
approval count of a pull request.
* In Gitea 1.14 (#12674), the "dismiss review" feature was added.
* This caused confusion with users (#25858), as "dismiss" now means 2
different things.
* In Gitea 1.20 (#25882), the behavior of the "dismiss stale approvals"
branch protection was modified to actually dismiss the stale review.
For some users this new behavior of dismissing the stale reviews is not
desirable.
So this PR reintroduces the old behavior as a new "ignore stale
approvals" branch protection setting.
---------
Co-authored-by: delvh <dev.lh@web.de>
Fix #28157
This PR fixes possible bugs with the actions schedule.
## The Changes
- Move `UpdateRepositoryUnit` and `SetRepoDefaultBranch` from models to
service layer
- Remove schedule plans from the database and cancel waiting & running
schedule tasks in the repository when the actions unit has been disabled
or actions have been disabled globally.
- Remove schedule plans from the database and cancel waiting & running
schedule tasks in the repository when the default branch changes.
Fix #27722
Fix #27357
Fix #25837
1. Fix the typo `BlockingByDependenciesNotPermitted`, which causes the
"not permitted" message not to show. The correct prefix is `Blocking` or
`BlockedBy`.
2. Rewrite the perm check. The perm check uses a very tricky way to
avoid duplicate checks for a slice of issues, which is confusing. In
fact, it is also what caused the bug: it uses `lastRepoID` and
`lastPerm` to avoid duplicate checks, but forgets to assign
`lastPerm` at the end of the code block. So I rewrote this to avoid the
trick.
![screenshot](https://github.com/go-gitea/gitea/assets/70063547/79acd02a-a567-4316-ae0d-11c6461becf1)
3. It also reuses the `blocks` slice, which is even more confusing. So I
rewrote this too.
![screenshot](https://github.com/go-gitea/gitea/assets/70063547/f21cff0f-d9ac-4ce4-ae4d-adffc98ecd99)
Introduce the new generic deletion methods
- `func DeleteByID[T any](ctx context.Context, id int64) (int64, error)`
- `func DeleteByIDs[T any](ctx context.Context, ids ...int64) error`
- `func Delete[T any](ctx context.Context, opts FindOptions) (int64,
error)`
So, we no longer need any specific deletion method and can just use
the generic ones instead.
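A minimal sketch of how DeleteByID and DeleteByIDs can be built on the existing engine helpers (error handling and the FindOptions variant are omitted):

```go
// DeleteByID deletes the row of type T whose primary key equals id and
// returns the number of affected rows (sketch).
func DeleteByID[T any](ctx context.Context, id int64) (int64, error) {
	var bean T
	return GetEngine(ctx).ID(id).NoAutoTime().Delete(&bean)
}

// DeleteByIDs deletes every row of type T whose primary key is in ids (sketch).
func DeleteByIDs[T any](ctx context.Context, ids ...int64) error {
	if len(ids) == 0 {
		return nil
	}
	var bean T
	_, err := GetEngine(ctx).In("id", ids).NoAutoTime().Delete(&bean)
	return err
}
```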
Replacement of #28450
Closes #28450
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The CORS code has been unmaintained for a long time, and its behavior is
not correct.
This PR tries to improve it. The key points are written as comments in
the code. Also, more tests were added.
Fix #28515
Fix #27642
Fix #17098
Nowadays, the cache is used almost everywhere in Gitea and it cannot
be disabled, otherwise some features become unavailable.
So I think we can just remove the option to enable the cache; that means
the cache can no longer be disabled.
Of course, we can still use the cache configuration to set how
Gitea should use the cache.
These four functions are duplicated, especially as interface methods. I
think we just need to keep `MustID` as the only one and remove the other three.
```
MustID(b []byte) ObjectID
MustIDFromString(s string) ObjectID
NewID(b []byte) (ObjectID, error)
NewIDFromString(s string) (ObjectID, error)
```
Introduced the new interface method `ComputeHash`, which will replace
the `HasherInterface` interface. Now we don't need to keep two
interfaces.
Reintroduced `git.NewIDFromString` and `git.MustIDFromString`. The new
function detects the hash length to decide which object format it is:
if it's 40, it's SHA-1; if it's 64, it's SHA-256. This is only correct
if the commit ID is a full one, so the parameter should always be a
full commit ID.
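The length check itself is simple; a sketch (the decoding helper is illustrative):

```go
// NewIDFromString picks the object format from the length of a full hex
// commit ID: 40 characters means SHA-1, 64 means SHA-256 (sketch).
func NewIDFromString(hexHash string) (ObjectID, error) {
	switch len(hexHash) {
	case 40:
		return decodeHexObjectID(Sha1ObjectFormat, hexHash) // hypothetical helper
	case 64:
		return decodeHexObjectID(Sha256ObjectFormat, hexHash) // hypothetical helper
	default:
		return nil, fmt.Errorf("invalid hash hex string length %d", len(hexHash))
	}
}
```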
@AdamMajer Please review.
- Modify the `Password` field in `CreateUserOption` struct to remove the
`Required` tag
- Update the `v1_json.tmpl` template to include the `email` field and
remove the `password` field
---------
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
- Remove `ObjectFormatID`
- Remove function `ObjectFormatFromID`.
- Use `Sha1ObjectFormat` directly, not a pointer to it, because it's an
empty struct.
- Store `ObjectFormatName` in `repository` struct
Refactor the hash interfaces and centralize the hash function. This will
allow easier introduction of different hash functions later on.
This forms the "no-op" part of the SHA256 enablement patch.
## Changes
- Add deprecation warning to `Token` and `AccessToken` authentication
methods in swagger.
- Add deprecation warning header to API response. Example:
```
HTTP/1.1 200 OK
...
Warning: token and access_token API authentication is deprecated
...
```
- Add setting `DISABLE_QUERY_AUTH_TOKEN` to reject query string auth
tokens entirely. Default is `false`
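Emitting the deprecation warning header from Go is a single header write; a sketch (the condition variable is illustrative):

```go
// Warn clients that authenticated through the token/access_token query string
// that this mechanism is deprecated (sketch).
if authenticatedViaQueryToken {
	resp.Header().Set("Warning", "token and access_token API authentication is deprecated")
}
```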
## Next steps
- `DISABLE_QUERY_AUTH_TOKEN` should be true in a subsequent release and
the methods should be removed in swagger
- `DISABLE_QUERY_AUTH_TOKEN` should be removed and the implementation of
the auth methods in question should be removed
## Open questions
- Should there be further changes to the swagger documentation?
Deprecation is not yet supported for security definitions (coming in
[OpenAPI Spec version
3.2.0](https://github.com/OAI/OpenAPI-Specification/issues/2506))
- Should the API router logger sanitize URLs that use `token` or
`access_token`? (This is obviously an insufficient solution on its own.)
---------
Co-authored-by: delvh <dev.lh@web.de>
Fix #28056
This PR checks whether the repo has zero branches when a branch is
pushed. If so, it means this repository hasn't been synced yet.
The reason this can happen is that after a user upgrades from v1.20 to
v1.21, they may push branches without ever visiting the repository in
the user interface. All repository routers check whether a branch sync
is necessary, but pushes had no such check.
Every repository has two states: synced or not synced. If a repository
has zero branches in the database, it is assumed to be in the non-synced
state; otherwise it is synced. If it is synced, we just need to update
the branch or insert the new branch; otherwise we do a full sync. That
way, almost no extra load is added for any push, which keeps it fast.
For the implementation, we in fact try to update the branch first. If
the update succeeds and affects more than zero records, we are done,
because that means the branch was already in the database. If no record
is affected, the branch does not exist in the database, and there are
two possibilities: either this is a new branch, in which case we just
insert the record, or the branches haven't been synced yet, in which
case we sync all branches into the database (as sketched below).
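In sketch form, with illustrative helper names, the push-time logic reads like this:

```go
// Sketch of the push-time branch handling described above.
affected, err := updateBranch(ctx, repoID, branchName, newCommitID) // illustrative helper
if err != nil {
	return err
}
if affected > 0 {
	return nil // the branch row already existed and has been updated
}
if hasAnyBranch(ctx, repoID) { // illustrative helper
	// The repository is already synced, so this is simply a new branch.
	return insertBranch(ctx, repoID, branchName, newCommitID) // illustrative helper
}
// Zero branches in the database: the repository was never synced (for example
// right after the v1.20 -> v1.21 upgrade), so sync all branches now.
return syncAllBranches(ctx, repoID) // illustrative helper
```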
Fixes #27819
We have support for two-factor logins with the normal web login and with
basic auth. For basic auth, the two-factor check was implemented in
three different places, and you needed to know that this check is
necessary. This PR moves the check into basic auth itself.
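A sketch of the centralized check inside the basic-auth verifier (the helper name is illustrative):

```go
// After the username/password pair has been validated, enforce the two-factor
// check in this one place instead of in every caller of basic auth (sketch).
if err := validateTOTP(req, u); err != nil { // illustrative helper
	return nil, err
}
return u, nil
```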