Fix #20456
At some point during the 1.17 cycle, abbreviated refishes to issue
branches started breaking. This is likely due to serious inconsistencies in
our management of refs throughout Gitea - a bug that needs to be
addressed in a different PR. (Likely more than one.)
We should try to use non-abbreviated `fullref`s as much as possible.
That is, where a user has input an abbreviated `refish`, we should add
`refs/heads/` if it is a `branch`, etc. I know people keep writing and
merging PRs that remove prefixes from stored content, but it is just
wrong and it keeps causing problems like this. We should only remove the
prefix at the time of
presentation, as the prefix is the only way of knowing unambiguously and
permanently whether the `ref` refers to a `branch`, `tag` or `commit` /
`SHA`. We need to make it so that every ref has the appropriate prefix,
and we probably also need to come up with some definitely unambiguous way
of storing `SHA`s if they're used in a `ref` or `refish` field. We must
not store a potentially
ambiguous `refish` as a `ref`. (Especially when referring to a `tag` -
there is no reason why users cannot create a `branch` with the same
short name as a `tag` and vice versa, and any attempt to prevent this
will fail. You can even create a `branch` and a
`tag` that matches the `SHA` pattern.)
To that end, in order to fix this bug, when parsing issue templates we check
the provided `Ref` (here a `refish`, because almost all users do not know
or understand the subtlety); if it does not start with `refs/`, we add the
`BranchPrefix` to it. This allows people to make their templates refer
to a `tag` but not to a `SHA` directly. (I don't think that is
particularly unreasonable, but if people disagree I can make the `refish`
be checked to see if it matches the `SHA` pattern.)
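A minimal sketch of that check, assuming the `git.BranchPrefix` constant from `modules/git`; the helper name is illustrative, not the actual template parser code:

```go
package issue

import (
	"strings"

	"code.gitea.io/gitea/modules/git"
)

// normalizeTemplateRef is a hypothetical helper illustrating the fix: an
// abbreviated refish coming from an issue template is assumed to be a branch
// unless the user already supplied a refs/ prefix (which still allows
// refs/tags/... but not a bare SHA).
func normalizeTemplateRef(refish string) string {
	if refish == "" || strings.HasPrefix(refish, "refs/") {
		return refish
	}
	return git.BranchPrefix + refish // git.BranchPrefix is "refs/heads/"
}
```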
Next we need to handle the issue links that are already written. The
links here are created with `git.RefURL`.
Here we see there is a bug introduced in #17551 whereby the provided
`ref` argument can be double-escaped, so we remove the incorrect external
escape. (The escape added in #17551 is in the right place -
unfortunately I missed that the calling function was doing the wrong
thing.)
Then within `RefURL()` we check if an unprefixed `ref` (therefore
potentially a `refish`) matches the `SHA` pattern before assuming that
it is actually a `commit` - otherwise it is assumed to be a `branch`. This
will handle most of the problem cases, except the very unusual case
where someone has deliberately named a `branch` to look like a `SHA1`.
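Roughly, the heuristic inside `RefURL()` looks like this (the names here are illustrative, and the real function builds a URL rather than returning a kind string):

```go
package git

import (
	"regexp"
	"strings"
)

// shaPattern is an assumed pattern for a full 40-character SHA1 hex string.
var shaPattern = regexp.MustCompile(`^[0-9a-f]{40}$`)

// refKindForURL is an illustrative helper showing the heuristic: a prefixed
// ref is unambiguous, an unprefixed refish that looks like a SHA is treated
// as a commit, and anything else is assumed to be a branch.
func refKindForURL(ref string) string {
	switch {
	case strings.HasPrefix(ref, "refs/heads/"):
		return "branch"
	case strings.HasPrefix(ref, "refs/tags/"):
		return "tag"
	case shaPattern.MatchString(ref):
		return "commit"
	default:
		return "branch"
	}
}
```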
But please, if something is called a `ref` or interpreted as a `ref`, make
it a full-ref before storing or using it. By all means, if something is a
`branch`, assume the prefix is removed, but always add it back in if you
are using it as a `ref`. Stop storing abbreviated `branch` and
`tag` names - which are `refish`es - as a `ref`. It will keep on causing
problems like this.
Fix #20456
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR adds a context parameter to a bunch of methods. Some helper
`xxxCtx()` methods have been removed and replaced by their normally-named counterparts.
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
GitHub migration tests will be skipped if the secret for the GitHub API
token hasn't been set.
This change should make all tests pass (or skip in the case of this one)
for anyone running the pipeline on their own infrastructure without
further action on their part.
Resolves https://github.com/go-gitea/gitea/issues/21739
Signed-off-by: Gary Moon <gary@garymoon.net>
The `getPullRequestPayloadInfo` function is widely used by many webhooks,
and it works well when a PR is opened or edited. But when we comment in the PR review
panel (not the PR panel), the comment content is not set as
`attachmentText`.
This commit sets the comment content as `attachmentText` for PR reviews, so
webhooks can obtain this information via this function.
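A rough sketch of the behaviour (the field names below follow the `api.PullRequestPayload` structs, but the helper itself is illustrative, not the exact `getPullRequestPayloadInfo` code):

```go
package webhook

import api "code.gitea.io/gitea/modules/structs"

// attachmentTextFor is illustrative only: for review events the review
// comment body becomes the attachment text, so chat-style webhooks can show
// what was actually written in the review panel.
func attachmentTextFor(p *api.PullRequestPayload) string {
	if p.Review != nil && p.Review.Content != "" {
		return p.Review.Content
	}
	return p.PullRequest.Body
}
```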
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Fix #19513
This PR introduces a new db method `InTransaction(context.Context)`,
and also a built-in check on `db.TxContext` and `db.WithTx`.
A new method `db.AutoTx` has also been introduced, which could be used by other PRs.
`WithTx` will always open a new transaction; if a transaction already exists in the context, it returns an error.
`AutoTx` will only try to open a new transaction if no transaction exists in the context.
That means it will always end up inside a transaction if there is no error.
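A rough sketch of the intended usage (the exact signatures in `models/db` may differ slightly):

```go
package example

import (
	"context"
	"errors"

	"code.gitea.io/gitea/models/db"
)

// updateTwoThings is illustrative: AutoTx reuses a transaction already carried
// by ctx or opens a new one, so callers can be nested without a double begin.
func updateTwoThings(ctx context.Context) error {
	return db.AutoTx(ctx, func(ctx context.Context) error {
		if !db.InTransaction(ctx) {
			return errors.New("expected to be inside a transaction here")
		}
		// ... run the statements that must succeed or fail together ...
		return nil
	})
}
```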
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
The purpose of #18982 is to improve the SMTP mailer, but some
unrelated changes were made to the SMTP auth in
d60c438694
This PR reverts these unrelated changes, fix #21744
Related #20471
This PR adds global quota limits for the package registry. Settings for
individual users/orgs can be added in a separate PR using the settings
table.
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Close https://github.com/go-gitea/gitea/issues/21640
Before: Gitea could create users like ".xxx" or "x..y", which is not
ideal; it's already a consensus that dot filenames have special
meanings, and `a..b` is a confusing name when doing a cross-repo compare.
After: the rules are stricter.
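As a rough sketch (the actual regexp in Gitea's validation code may differ), the stricter rule rejects leading/trailing dots and consecutive dots:

```go
package validation

import "regexp"

// validUsername is an assumed, illustrative pattern: letters, digits, dash,
// underscore and dot, where a dot may only appear between other characters
// and never twice in a row - so ".xxx", "xxx." and "x..y" are all rejected.
var validUsername = regexp.MustCompile(`^[\w-]+(\.[\w-]+)*$`)

// IsValidUsername reports whether name passes the stricter rule.
func IsValidUsername(name string) bool {
	return validUsername.MatchString(name)
}
```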
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
_This is a different approach to #20267; I took the liberty of adapting
some parts, see below_
## Context
In some cases, a webhook endpoint requires some kind of authentication.
The usual way is by sending a static `Authorization` header with a
given token. For instance:
- Matrix expects a `Bearer <token>` (already implemented, by storing the
header cleartext in the metadata - which is buggy on retry #19872)
- TeamCity #18667
- Gitea instances #20267
- SourceHut https://man.sr.ht/graphql.md#authentication-strategies (this
is my actual personal need :)
## Proposed solution
Add a dedicated encrypted column to the webhook table (instead of storing
it as meta as proposed in #20267), so that it becomes available for all
present and future hook types (especially the custom ones, #19307).
This would also solve the buggy Matrix retry #19872.
As a first step, I would recommend focusing on the backend logic and
improving the frontend at a later stage. For now the UI is a simple
`Authorization` field (which could later be customized with `Bearer` and
`Basic` switches):
![2022-08-23-142911](https://user-images.githubusercontent.com/3864879/186162483-5b721504-eef5-4932-812e-eb96a68494cc.png)
The header name is hard-coded, since I couldn't find any use case
justifying otherwise.
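A rough sketch of the delivery-side effect (the accessor name is an assumption about the new column's decryption helper):

```go
package webhook

import (
	"net/http"

	webhook_model "code.gitea.io/gitea/models/webhook"
)

// addAuthorizationHeader is illustrative only: the encrypted value stored in
// the new column is decrypted and attached to the outgoing request for every
// hook type, instead of living in per-type metadata.
func addAuthorizationHeader(req *http.Request, w *webhook_model.Webhook) error {
	header, err := w.HeaderAuthorization() // assumed accessor for the encrypted column
	if err != nil {
		return err
	}
	if header != "" {
		req.Header.Set("Authorization", header)
	}
	return nil
}
```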
## Questions
- What do you think of this approach? @justusbunsi @Gusted @silverwind
- ~~How are the migrations generated? Do I have to manually create a new
file, or is there a command for that?~~
- ~~I started adding it to the API: should I complete it or should I
drop it? (I don't know how much the API is actually used)~~
## Done as well:
- add a migration for the existing matrix webhooks and remove the
`Authorization` logic there
_Closes #19872_
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: delvh <dev.lh@web.de>
I found myself wondering whether a PR I scheduled for automerge was
actually merged. It was, but I didn't receive a mail notification for it
- that makes sense considering I am the doer and usually don't want to
receive such notifications. But ideally I want to receive a notification
when a PR was merged because I scheduled it for automerge.
This PR implements exactly that.
The implementation works, but I wonder if there's a way to avoid passing
the "This PR was automerged" state down so much. I tried solving this
via the database (checking if there's an automerge scheduled for this PR
when sending the notification) but that did not work reliably, probably
because sending the notification happens async and the entry might have
already been deleted. My implementation might be the most
straightforward but maybe not the most elegant.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
GitHub allows releases with a target commitish of `refs/heads/BRANCH`, which
then causes issues in Gitea after migration. This fix handles the case where
a branch already has the prefix.
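A minimal sketch of the normalization, assuming the `git.BranchPrefix` constant; the helper name is illustrative:

```go
package migration

import (
	"strings"

	"code.gitea.io/gitea/modules/git"
)

// normalizeTargetCommitish is illustrative: a migrated release target of
// "refs/heads/main" is stored as plain "main"; anything without the prefix
// is left untouched.
func normalizeTargetCommitish(target string) string {
	return strings.TrimPrefix(target, git.BranchPrefix)
}
```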
Fixes #20317
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
A bug was introduced in #17865 where filepath.Join is used to join
putative unadopted repository owners and names together. This is
incorrect as these names are then used as repository names - which should
have the '/' separator. This means that adoption will not work on
Windows servers.
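A minimal sketch of why `path.Join` is the right tool here (illustrative helper, not the exact adoption code):

```go
package example

import "path"

// unadoptedRepoName builds the "<owner>/<name>" form used as a repository
// name. filepath.Join would produce "owner\name" on a Windows server, which
// is not a valid repository name, so the always-'/' path.Join is used.
func unadoptedRepoName(owner, name string) string {
	return path.Join(owner, name)
}
```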
Fix #21632
Signed-off-by: Andrew Thornton <art27@cantab.net>
The OAuth spec [defines two types of
client](https://datatracker.ietf.org/doc/html/rfc6749#section-2.1),
confidential and public. Previously Gitea assumed all clients to be
confidential.
> OAuth defines two client types, based on their ability to authenticate
securely with the authorization server (i.e., ability to
> maintain the confidentiality of their client credentials):
>
> confidential
> Clients capable of maintaining the confidentiality of their
credentials (e.g., client implemented on a secure server with
> restricted access to the client credentials), or capable of secure
client authentication using other means.
>
> **public
> Clients incapable of maintaining the confidentiality of their
credentials (e.g., clients executing on the device used by the resource
owner, such as an installed native application or a web browser-based
application), and incapable of secure client authentication via any
other means.**
>
> The client type designation is based on the authorization server's
definition of secure authentication and its acceptable exposure levels
of client credentials. The authorization server SHOULD NOT make
assumptions about the client type.
https://datatracker.ietf.org/doc/html/rfc8252#section-8.4
> Authorization servers MUST record the client type in the client
registration details in order to identify and process requests
accordingly.
Require PKCE for public clients:
https://datatracker.ietf.org/doc/html/rfc8252#section-8.1
> Authorization servers SHOULD reject authorization requests from native
apps that don't use PKCE by returning an error message
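A rough sketch of the resulting check in the authorization flow (the `ConfidentialClient` field name and error handling are assumptions):

```go
package oauth2

import (
	"errors"

	auth_model "code.gitea.io/gitea/models/auth"
)

// checkPublicClientPKCE is illustrative only: authorization requests from
// public (non-confidential) clients must carry a PKCE code_challenge.
func checkPublicClientPKCE(app *auth_model.OAuth2Application, codeChallenge string) error {
	if !app.ConfidentialClient && codeChallenge == "" {
		return errors.New("PKCE is required for public clients")
	}
	return nil
}
```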
Fixes #21299
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Previously, mentioning a user would link to their profile regardless of
whether the user existed. This change checks whether the user exists, and
only if they do is a link to their profile added.
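A rough sketch of the idea (the helper and the `IsUserExist` signature are assumptions, not the exact rendering code):

```go
package markup

import (
	"context"
	"fmt"

	user_model "code.gitea.io/gitea/models/user"
)

// renderMention is illustrative only: a mention of a name that does not
// belong to an existing user stays plain text instead of becoming a dead link.
func renderMention(ctx context.Context, name string) string {
	exists, err := user_model.IsUserExist(ctx, 0, name) // assumed signature
	if err != nil || !exists {
		return "@" + name
	}
	return fmt.Sprintf(`<a href="/%s">@%s</a>`, name, name)
}
```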
* Fixes #3444
Signed-off-by: Yarden Shoham <hrsi88@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
When actions besides "delete" are performed on issues, the milestone
counter is updated. However, since deleting issues goes through a
different code path, the associated milestone's count wasn't being
updated, resulting in inaccurate counts until another issue in the same
milestone had a non-delete action performed on it.
I verified this change fixes the inaccurate counts using a local docker
build.
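A rough sketch of the fix (assuming the existing `UpdateMilestoneCounters` helper; the real delete path does more than this):

```go
package issues

import (
	"context"

	issues_model "code.gitea.io/gitea/models/issues"
)

// afterIssueDelete is illustrative only: when an issue is deleted, refresh the
// open/closed counters of the milestone it belonged to, just like the other
// issue actions already do.
func afterIssueDelete(ctx context.Context, issue *issues_model.Issue) error {
	if issue.MilestoneID > 0 {
		return issues_model.UpdateMilestoneCounters(ctx, issue.MilestoneID)
	}
	return nil
}
```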
Fixes #21254
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
At the moment a repository reference is needed for webhooks. With the
upcoming package PR we need to send webhooks without a repository
reference, for example when a package is uploaded to an organization. In
theory this enables the usage of webhooks for future user actions.
This PR removes the repository id from `HookTask` and changes how the
hooks are processed (see `services/webhook/deliver.go`). In a follow-up
PR I want to remove the usage of the `UniqueQueue` and replace it with a
normal queue, because there is no reason for it to be unique.
Co-authored-by: 6543 <6543@obermui.de>
When a PR reviewer reviewed a file on a commit that was later gc'ed,
they would always get a `500` response from then on when loading the PR.
This PR simply ignores that error and instead marks all files as
unchanged.
This approach was chosen as the only feasible option without diving into
**a lot** of error handling.
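A rough sketch of the approach (the helper names are assumptions): when the reviewed commit can no longer be resolved, treat it as if nothing changed rather than failing the whole page.

```go
package pull

import "code.gitea.io/gitea/modules/git"

// filesChangedSinceReview is illustrative only, not the actual view-file code.
func filesChangedSinceReview(gitRepo *git.Repository, reviewedCommitID, headCommitID string) ([]string, error) {
	files, err := gitRepo.GetFilesChangedBetween(reviewedCommitID, headCommitID) // assumed helper
	if err != nil {
		if git.IsErrNotExist(err) { // assumed check for a gc'ed / missing object
			return nil, nil // treat every file as unchanged instead of returning a 500
		}
		return nil, err
	}
	return files, nil
}
```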
Fixes #21392
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
A lot of our code is repeatedly testing if individual errors are
specific types of Not Exist errors. This is repetitive and unnecessary.
`Unwrap() error` provides a common way of labelling an error as a
NotExist error and we can/should use this.
This PR has chosen to use the common `io/fs` errors e.g.
`fs.ErrNotExist` for our errors. This is in some ways not completely
correct as these are not filesystem errors but it seems like a
reasonable thing to do and would allow us to simplify a lot of our code
to `errors.Is(err, fs.ErrNotExist)` instead of
`package.IsErr...NotExist(err)`
I am open to suggestions to use a different base error - perhaps
`models/db.ErrNotExist` if that would be felt to be better.
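A minimal sketch of the pattern with an illustrative error type:

```go
package packages

import (
	"fmt"
	"io/fs"
)

// ErrPackageNotExist is an illustrative domain error. Implementing Unwrap to
// return fs.ErrNotExist lets callers write errors.Is(err, fs.ErrNotExist)
// instead of a dedicated IsErrPackageNotExist helper.
type ErrPackageNotExist struct {
	Name string
}

func (e ErrPackageNotExist) Error() string {
	return fmt.Sprintf("package does not exist [name: %s]", e.Name)
}

func (e ErrPackageNotExist) Unwrap() error {
	return fs.ErrNotExist
}
```

Callers can then check `errors.Is(err, fs.ErrNotExist)` regardless of which domain error was returned.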
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
For normal commits the notification URL was wrong because oldCommitID is taken from the shrunk commits list.
This PR moves the shrinking of the commits list to after the oldCommitID assignment.
Fixes #21379
The commits are capped by `setting.UI.FeedMaxCommitNum`, so
`len(commits)` is not the correct total. So this PR adds a new
`TotalCommits` field.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
When merge was changed to run in the background context, the db updates
were still running in the request context. This meant that the merge could
be successful but the db not be updated.
This PR changes both of these to run in the hammer context; this is not
complete rollback protection, but it's much better.
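A rough sketch of the shape of the change (the stage helpers are hypothetical; the real merge code does much more):

```go
package pull

import (
	"context"

	issues_model "code.gitea.io/gitea/models/issues"
	"code.gitea.io/gitea/modules/graceful"
)

// hypothetical stages, shown only to make clear which context they run in
func runGitMerge(ctx context.Context, pr *issues_model.PullRequest) error              { return nil }
func updateDatabaseAfterMerge(ctx context.Context, pr *issues_model.PullRequest) error { return nil }

// merge is illustrative only: the hammer context outlives the incoming HTTP
// request, so the database update cannot be cancelled after the git merge
// itself has already succeeded.
func merge(pr *issues_model.PullRequest) error {
	ctx := graceful.GetManager().HammerContext()
	if err := runGitMerge(ctx, pr); err != nil {
		return err
	}
	return updateDatabaseAfterMerge(ctx, pr)
}
```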
Fix #21332
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
We should only log CheckPath errors if they are not simply due to
context cancellation - and we should add a little more context to the
error message.
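A minimal sketch of the guard (illustrative helper, not the exact code):

```go
package example

import (
	"context"
	"errors"

	"code.gitea.io/gitea/modules/log"
)

// logCheckPathError is illustrative: cancellation just means the caller went
// away, so only real failures are logged, and the failing path is included.
func logCheckPathError(path string, err error) {
	if err == nil || errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return
	}
	log.Error("CheckPath %s failed: %v", path, err)
}
```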
Fix #20709
Signed-off-by: Andrew Thornton <art27@cantab.net>
Close #20315 (fix the panic when parsing invalid input), speed up #20231 (use ls-tree without size field)
Introduce ListEntriesRecursiveFast (ls-tree without size) and ListEntriesRecursiveWithSize (ls-tree with size)
Only load SECRET_KEY and INTERNAL_TOKEN if they exist.
Never write the config file if the keys do not exist, which was only a fallback for Gitea upgraded from < 1.5
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This is useful in scenarios where the reverse proxy may have knowledge
of user emails, but does not know about usernames set on Gitea,
as in the feature request in #19948.
I tested this by setting up a fresh gitea install with one user `mhl`
and email `m.hasnain.lakhani@gmail.com`. I then created a private repo,
and configured gitea to allow reverse proxy authentication.
Via curl I confirmed that these two requests now work and return 200s:
curl http://localhost:3000/mhl/private -I --header "X-Webauth-User: mhl"
curl http://localhost:3000/mhl/private -I --header "X-Webauth-Email: m.hasnain.lakhani@gmail.com"
Before this commit, the second request did not work.
I also verified that if I provide an invalid email or user,
a 404 is correctly returned, as before.
Closes #19948
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
The behaviour of `PreventSurroundingPre` has changed in
https://github.com/alecthomas/chroma/pull/618 so that line wrapper tags
are apparently no longer emitted, but we need some form
of indication to split the HTML into lines, so I did what
https://github.com/yuin/goldmark-highlighting/pull/33 did and added the
`nopWrapper`.
Maybe there are more elegant solutions but for some reason, just
splitting the HTML string on `\n` did not work.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Both allow only limited characters. If you input more, you will get an error
message, so it makes sense to limit the characters of the input fields.
Slightly relax the MaxSize of the repo's Description and Website fields.
If you create a new branch while viewing a file or directory, you
get redirected to the root of the repo. With this PR, you keep your
current path instead of getting redirected to the repo root.
The `go-licenses` make task introduced in #21034 is being run on `make vendor`
and occasionally causes an empty go-licenses file if the vendors need to
change. This should be moved to the generate task as it is a generated file.
Now because of this change we also need to split generation into two separate
steps:
1. `generate-backend`
2. `generate-frontend`
In the future it would probably be useful to make `generate-swagger` part of `generate-frontend`, but it doesn't currently work with our .drone.yml
Ref #21034
Signed-off-by: Andrew Thornton <art27@cantab.net>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>