When re-retrieving hook tasks from the DB, double-check that they have not
been delivered in the meantime. Further, ensure that tasks are marked as
delivered when they are being delivered.
In addition:
* Improve the error reporting and make sure that the webhook task
population script runs in a separate goroutine.
* Only get hook task IDs out of the DB instead of the whole task rows when
repopulating the queue
* When repopulating the queue, page the DB requests (see the sketch below)
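As a rough illustration, a paged, ID-only repopulation loop could look like the sketch below. It uses plain `database/sql` rather than Gitea's actual models layer; the `hook_task` table/column names, the `push` callback, and the keyset-style paging are illustrative assumptions, not the real implementation.

```go
package hooks

import (
	"context"
	"database/sql"
)

// pageSize bounds each query so repopulating the queue never loads the
// whole hook_task table at once.
const pageSize = 100

// repopulateHookTaskIDs sketches the paged, ID-only repopulation described
// above: fetch one page of undelivered task IDs, enqueue them, and continue
// until a short page signals the end.
func repopulateHookTaskIDs(ctx context.Context, db *sql.DB, push func(id int64) error) error {
	var lastID int64
	for {
		rows, err := db.QueryContext(ctx,
			"SELECT id FROM hook_task WHERE is_delivered = ? AND id > ? ORDER BY id LIMIT ?",
			false, lastID, pageSize)
		if err != nil {
			return err
		}
		var ids []int64
		for rows.Next() {
			var id int64
			if err := rows.Scan(&id); err != nil {
				rows.Close()
				return err
			}
			ids = append(ids, id)
		}
		if err := rows.Err(); err != nil {
			rows.Close()
			return err
		}
		rows.Close()
		for _, id := range ids {
			// The consumer re-checks the delivered flag before sending.
			if err := push(id); err != nil {
				return err
			}
		}
		if len(ids) < pageSize {
			return nil // last (short) page reached
		}
		lastID = ids[len(ids)-1]
	}
}
```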
Ref #17940
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Unfortunately #21549 changed the names of test cases without changing
their associated fixture directories.
Fix #21854
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR adds a context parameter to a bunch of methods. Some helper
`xxxCtx()` methods have been replaced by their normally named counterparts.
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- It's possible that the `user_redirect` table contains a user id that
no longer exists.
- Delete a user redirect upon deleting the user.
- Add a check for these dangling user redirects to check-db-consistency.
The doctor check `storages` currently only checks the attachment
storage. This PR adds some basic garbage collection functionality for
the other types of storage.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Fix #19513
This PR introduces a new db method `InTransaction(context.Context)`,
and also adds built-in checks to `db.TxContext` and `db.WithTx`.
A new method `db.AutoTx` has also been introduced; it is not used yet but could be used by other PRs.
`WithTx` will always open a new transaction; if a transaction already exists in the context, it returns an error.
`AutoTx` will only open a new transaction if no transaction exists in the context.
That means it will always run inside a transaction if there is no error.
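A minimal sketch of the intended semantics (not the actual `models/db` implementation): the transaction state is carried in the context so the helpers can detect whether one is already open. All names and signatures below are illustrative.

```go
package db

import (
	"context"
	"errors"
)

type txKey struct{}

// InTransaction reports whether ctx already carries an open transaction.
func InTransaction(ctx context.Context) bool {
	_, ok := ctx.Value(txKey{}).(bool)
	return ok
}

// WithTx always opens a new transaction and errors out if one already
// exists in the context.
func WithTx(ctx context.Context, f func(ctx context.Context) error) error {
	if InTransaction(ctx) {
		return errors.New("a transaction already exists in the context")
	}
	txCtx := context.WithValue(ctx, txKey{}, true) // begin/commit/rollback omitted
	return f(txCtx)
}

// AutoTx reuses the transaction from the context if there is one, otherwise
// it opens a new one, so the callback always runs inside a transaction.
func AutoTx(ctx context.Context, f func(ctx context.Context) error) error {
	if InTransaction(ctx) {
		return f(ctx)
	}
	return WithTx(ctx, f)
}
```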
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
Related #20471
This PR adds global quota limits for the package registry. Settings for
individual users/orgs can be added in a separate PR using the settings
table.
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Close https://github.com/go-gitea/gitea/issues/21640
Before: Gitea could create users like ".xxx" or "x..y", which is not
ideal; it's already a consensus that dot filenames have special
meanings, and `a..b` is a confusing name when doing a cross-repo compare.
After: the rules are stricter.
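As an illustration only (not the exact pattern Gitea ended up using), a stricter rule could be expressed along these lines: dots are allowed only between alphanumeric/`-`/`_` segments, so names may not start or end with a dot and may not contain `..`.

```go
package validation

import "regexp"

// usernamePattern allows alphanumerics, '-' and '_', with single dots only
// between such segments, so ".xxx", "x..y" and trailing dots are rejected.
var usernamePattern = regexp.MustCompile(`^[a-zA-Z0-9_-]+(\.[a-zA-Z0-9_-]+)*$`)

// IsValidUsername reports whether name satisfies the stricter rule above.
func IsValidUsername(name string) bool {
	return usernamePattern.MatchString(name)
}
```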
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: delvh <dev.lh@web.de>
_This is a different approach to #20267, I took the liberty of adapting
some parts, see below_
## Context
In some cases, a webhook endpoint requires some kind of authentication.
The usual way is by sending a static `Authorization` header, with a
given token. For instance:
- Matrix expects a `Bearer <token>` (already implemented by storing the
header in cleartext in the metadata - which is buggy on retry, #19872)
- TeamCity #18667
- Gitea instances #20267
- SourceHut https://man.sr.ht/graphql.md#authentication-strategies (this
is my actual personal need :)
## Proposed solution
Add a dedicated encrypted column to the webhook table (instead of storing
it as meta as proposed in #20267), so that it becomes available for all
present and future hook types (especially the custom ones, #19307).
This would also solve the buggy matrix retry #19872.
As a first step, I would recommend focusing on the backend logic and
improving the frontend at a later stage. For now the UI is a simple
`Authorization` field (which could be later customized with `Bearer` and
`Basic` switches):
![2022-08-23-142911](https://user-images.githubusercontent.com/3864879/186162483-5b721504-eef5-4932-812e-eb96a68494cc.png)
The header name is hard-coded, since I couldn't find any use case
justifying otherwise.
## Questions
- What do you think of this approach? @justusbunsi @Gusted @silverwind
- ~~How are the migrations generated? Do I have to manually create a new
file, or is there a command for that?~~
- ~~I started adding it to the API: should I complete it or should I
drop it? (I don't know how much the API is actually used)~~
## Done as well:
- add a migration for the existing matrix webhooks and remove the
`Authorization` logic there
_Closes #19872_
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: delvh <dev.lh@web.de>
I found myself wondering whether a PR I scheduled for automerge was
actually merged. It was, but I didn't receive a mail notification for it
- that makes sense considering I am the doer and usually don't want to
receive such notifications. But ideally I want to receive a notification
when a PR was merged because I scheduled it for automerge.
This PR implements exactly that.
The implementation works, but I wonder if there's a way to avoid passing
the "This PR was automerged" state down so much. I tried solving this
via the database (checking if there's an automerge scheduled for this PR
when sending the notification) but that did not work reliably, probably
because sending the notification happens async and the entry might have
already been deleted. My implementation might be the most
straightforward but maybe not the most elegant.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
The OAuth spec [defines two types of
client](https://datatracker.ietf.org/doc/html/rfc6749#section-2.1),
confidential and public. Previously Gitea assumed all clients to be
confidential.
> OAuth defines two client types, based on their ability to authenticate securely with the authorization server (i.e., ability to maintain the confidentiality of their client credentials):
>
> confidential
> Clients capable of maintaining the confidentiality of their credentials (e.g., client implemented on a secure server with restricted access to the client credentials), or capable of secure client authentication using other means.
>
> **public
> Clients incapable of maintaining the confidentiality of their credentials (e.g., clients executing on the device used by the resource owner, such as an installed native application or a web browser-based application), and incapable of secure client authentication via any other means.**
>
> The client type designation is based on the authorization server's definition of secure authentication and its acceptable exposure levels of client credentials. The authorization server SHOULD NOT make assumptions about the client type.

https://datatracker.ietf.org/doc/html/rfc8252#section-8.4

> Authorization servers MUST record the client type in the client registration details in order to identify and process requests accordingly.

Require PKCE for public clients:
https://datatracker.ietf.org/doc/html/rfc8252#section-8.1

> Authorization servers SHOULD reject authorization requests from native apps that don't use PKCE by returning an error message
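For reference, the S256 check from RFC 7636 boils down to comparing the stored `code_challenge` against the hash of the `code_verifier` presented at token exchange. The sketch below is a generic illustration of that check, not Gitea's implementation; the package and function names are assumptions.

```go
package oauth

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/base64"
)

// VerifyS256 checks code_challenge == BASE64URL-ENCODE(SHA256(code_verifier))
// as defined by RFC 7636 for the "S256" code challenge method.
func VerifyS256(codeVerifier, codeChallenge string) bool {
	sum := sha256.Sum256([]byte(codeVerifier))
	expected := base64.RawURLEncoding.EncodeToString(sum[:])
	return subtle.ConstantTimeCompare([]byte(expected), []byte(codeChallenge)) == 1
}
```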
Fixes #21299
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
When actions besides "delete" are performed on issues, the milestone
counter is updated. However, since deleting issues goes through a
different code path, the associated milestone's count wasn't being
updated, resulting in inaccurate counts until another issue in the same
milestone had a non-delete action performed on it.
I verified this change fixes the inaccurate counts using a local docker
build.
Fixes #21254
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
At the moment a repository reference is needed for webhooks. With the
upcoming package PR we need to send webhooks without a repository
reference, for example when a package is uploaded to an organization. In
theory this enables the usage of webhooks for future user actions.
This PR removes the repository id from `HookTask` and changes how the
hooks are processed (see `services/webhook/deliver.go`). In a follow-up
PR I want to remove the usage of the `UniqueQueue` and replace it with a
normal queue because there is no reason for it to be unique.
Co-authored-by: 6543 <6543@obermui.de>
A lot of our code is repeatedly testing if individual errors are
specific types of Not Exist errors. This is repetitive and unnecessary.
`Unwrap() error` provides a common way of labelling an error as a
NotExist error and we can/should use this.
This PR has chosen to use the common `io/fs` errors, e.g.
`fs.ErrNotExist`, for our errors. This is in some ways not completely
correct as these are not filesystem errors, but it seems like a
reasonable thing to do and would allow us to simplify a lot of our code
to `errors.Is(err, fs.ErrNotExist)` instead of
`package.IsErr...NotExist(err)`.
I am open to suggestions to use a different base error - perhaps
`models/db.ErrNotExist` if that is felt to be better.
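A minimal sketch of the pattern, using a hypothetical `ErrPackageNotExist` error rather than an actual Gitea type: the error unwraps to `fs.ErrNotExist`, so callers only need `errors.Is`.

```go
package packages

import (
	"errors"
	"fmt"
	"io/fs"
)

// ErrPackageNotExist is a hypothetical NotExist error for illustration.
type ErrPackageNotExist struct {
	Name string
}

func (e ErrPackageNotExist) Error() string {
	return fmt.Sprintf("package does not exist [name: %s]", e.Name)
}

// Unwrap labels this error as a NotExist error by unwrapping to the
// common io/fs sentinel.
func (e ErrPackageNotExist) Unwrap() error {
	return fs.ErrNotExist
}

// IsNotExist shows the caller side: no package-specific helper is needed.
func IsNotExist(err error) bool {
	return errors.Is(err, fs.ErrNotExist)
}
```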
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
Depends on #18871.
Added some API integration tests to help the testing of #18798.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Related:
* #21362
This PR uses a general and stable method to generate resource indexes (eg:
Issue Index, PR Index).
If the code looks good, I can add more tests.
PS: please skip the diff and only have a look at the new code; it's
entirely re-written.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
For security reasons, all e-mail addresses starting with
non-alphanumeric characters were rejected. This is too broad and rejects
perfectly valid e-mail addresses. Only leading hyphens should be
rejected -- in all other cases e-mail address specification should
follow RFC 5322.
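A minimal sketch of the resulting rule, using a hypothetical validator (not the actual Gitea code): reject only a leading hyphen and otherwise defer to RFC 5322 parsing via the standard library.

```go
package validation

import (
	"net/mail"
	"strings"
)

// IsValidEmail rejects only a leading hyphen and otherwise defers to
// RFC 5322 parsing via net/mail.
func IsValidEmail(addr string) bool {
	if strings.HasPrefix(addr, "-") {
		return false
	}
	_, err := mail.ParseAddress(addr)
	return err == nil
}
```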
Co-authored-by: Andreas Fischer <_@ndreas.de>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
- Currently `repository.Num{Issues,Pulls}` weren't checked and could
become inconsistent. Adds these two checks to `CheckRepoStats`.
- Fix incorrect SQL query for `repository.NumClosedPulls`; the check
should be for `repo_num_pulls`.
- Reference: https://codeberg.org/Codeberg/Community/issues/696
Adds the settings pages to create OAuth2 apps to the org settings as well,
and allows creating apps for orgs.
Refactoring: the OAuth2-related templates are shared for
instance-wide/org/user, and the backend code uses `OAuth2CommonHandlers`
to share code for instance-wide/org/user.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #21250
Related #20414
Conan packages don't have to follow SemVer.
The migration fixes the setting for all existing Conan and Generic
(#20414) packages.
Adds GitHub-like pages to view watched repos and subscribed issues/PRs.
This is my second try to fix this, but it is better than the first since
it doesn't use a filter option, which could be slow when accessing
`/issues` or `/pulls`, and it shows both pulls and issues (the first try
is #17053).
Closes #16111
Replaces and closes #17053
![Screenshot](https://user-images.githubusercontent.com/80460567/134782937-3112f7da-425a-45b6-9511-5c9695aee896.png)
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This adds an API endpoint `/files` to PRs that allows getting a list of changed files.
Built upon #18228; the reviews there are included.
closes https://github.com/go-gitea/gitea/issues/654
Co-authored-by: Anton Bracke <anton@ju60.de>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Fixes #21206
If user and viewer are equal, the method should return true.
Also the common organization check was wrong, as `count` can never be
less than 0.
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
There is a mistake in the batched delete comments part of DeleteUser which causes some comments to not be deleted.
The code incorrectly updates the `start` of the limit clause, resulting in most comments not being deleted.
```go
if err = e.Where("type=? AND poster_id=?", issues_model.CommentTypeComment, u.ID).Limit(batchSize, start).Find(&comments); err != nil {
```
should be:
```go
if err = e.Where("type=? AND poster_id=?", issues_model.CommentTypeComment, u.ID).Limit(batchSize, 0).Find(&comments); err != nil {
```
Co-authored-by: zeripath <art27@cantab.net>
Add support for triggering webhook notifications on wiki changes.
This PR contains the frontend and backend for webhook notifications on wiki actions (create a new page, rename a page, edit a page and delete a page). The frontend got a new checkbox under the Custom Event -> Repository Events section. There is only one checkbox for the create/edit/rename/delete actions, because it makes no sense to separate them, and others like releases or packages follow the same schema.
![image](https://user-images.githubusercontent.com/121972/177018803-26851196-831f-4fde-9a4c-9e639b0e0d6b.png)
The actions themselves are separated, so that different notifications will be executed (with the "action" field). All the webhook receivers implement the new interface method (Wiki) and the corresponding tests.
While implementing this, I encountered a little bug when editing a wiki page. Creating and editing a wiki page is technically the same action and will be handled by the ```updateWikiPage``` function. But the function needs to know if it is a new wiki page or just a change. This distinction is made by the ```action``` parameter, which was not sent by the frontend (on form submit). This PR fixes this by adding the ```action``` parameter with the values ```_new``` or ```_edit```, which will be used by the ```updateWikiPage``` function.
I've done integration tests with matrix and gitea (http).
![image](https://user-images.githubusercontent.com/121972/177018795-eb5cdc01-9ba3-483e-a6b7-ed0e313a71fb.png)
Fix #16457
Signed-off-by: Aaron Fischer <mail@aaron-fischer.net>
A testing cleanup.
This pull request replaces `os.MkdirTemp` with `t.TempDir`. We can use the `T.TempDir` function from the `testing` package to create a temporary directory. The directory created by `T.TempDir` is automatically removed when the test and all its subtests complete.
This saves us at least 2 lines (error check and cleanup) on every instance, or in some cases adds cleanup that we forgot.
Reference: https://pkg.go.dev/testing#T.TempDir
```go
func TestFoo(t *testing.T) {
// before
tmpDir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(tmpDir)
// now
tmpDir := t.TempDir()
}
```
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
When migrating, add several more important sanity checks:
* SHAs must be SHAs
* Refs must be valid Refs
* URLs must be reasonable
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <matti@mdranta.net>
In #21031 we have discovered that on very big tables Postgres will use a
search involving the sort term in preference to the restrictive index.
Therefore we add another index for Postgres and update the original migration.
Fix #21031
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix hard-coded timeout and error panic in API archive download endpoint
This commit updates the `GET /api/v1/repos/{owner}/{repo}/archive/{archive}`
endpoint which prior to this PR had a couple of issues.
1. The endpoint had a hard-coded 20s timeout for the archiver to complete, after
which a 500 (Internal Server Error) was returned to the client. For a scripted
API client there was no clear way of telling that the operation timed out and
that it should retry.
2. Whenever the timeout _did occur_, the code used to panic. This was caused by
the API endpoint "delegating" to the same call path as the web, which uses a
slightly different way of reporting errors (HTML rather than JSON for
example).
More specifically, `api/v1/repo/file.go#GetArchive` just called through to
`web/repo/repo.go#Download`, which expects the `Context` to have a `Render`
field set, but which is `nil` for API calls. Hence, a `nil` pointer error.
The code addresses (1) by dropping the hard-coded timeout. Instead, any
timeout/cancellation on the incoming `Context` is used.
The code addresses (2) by updating the API endpoint to use a separate call path
for the API-triggered archive download. This avoids producing HTML errors on
failure (it now produces JSON errors).
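A minimal sketch of the idea behind (1), not the actual handler: the wait is bound to the incoming request context rather than a fixed timer, so the client's own timeout or cancellation decides how long to wait. The `done` channel stands in for the archiver signalling completion; the names are illustrative.

```go
package repo

import "context"

// waitForArchive waits for the archiver to finish, bounded only by the
// incoming request context rather than a fixed 20s timer.
func waitForArchive(ctx context.Context, done <-chan struct{}) error {
	select {
	case <-done:
		return nil // archive is ready, stream it to the client
	case <-ctx.Done():
		return ctx.Err() // context.Canceled or context.DeadlineExceeded
	}
}
```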
Signed-off-by: Peter Gardfjäll <peter.gardfjall.work@gmail.com>
Adds a new option to only show relevant repos on the explore page. For bigger Gitea instances like Codeberg this is a nice option to enable, making the explore page more populated with unique and high-quality repos. A note is shown that the results are filtered, with the possibility to see the unfiltered results.
Co-authored-by: vednoc <vednoc@protonmail.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: 6543 <6543@obermui.de>
Unfortunately some keys are too big to fit within the 65535-byte limit of TEXT on MySQL;
this causes issues with these large keys.
Therefore increase these fields to MEDIUMTEXT.
Fix #20894
Signed-off-by: Andrew Thornton <art27@cantab.net>
- Currently the function takes in the `UserID` option, but it isn't being
used within the SQL query. This patch fixes that by ensuring that only
teams the user belongs to are returned.
Fix #20829
Co-authored-by: delvh <dev.lh@web.de>
Whilst looking at #20840 I noticed that the Mirrors data doesn't appear
to be used, therefore we can remove it; in fact none of the
related code is used elsewhere, so it can also be removed.
Related #20840
Related #20804
Signed-off-by: Andrew Thornton <art27@cantab.net>
In MirrorRepositoryList.loadAttributes there is some code to load the Mirror entries
from the database. This assumes that every Repository which has IsMirror set has
a Mirror associated in the DB. This assumption is incorrect in the case of a
mirror repository under creation, when there is no Mirror entry in the DB until
completion.
Unfortunately LoadAttributes makes this incorrect assumption and presumes that a
Mirror will always be loaded. This then causes a panic.
This PR simply double-checks if there is a Mirror before attempting to link back to
its Repo. Unfortunately it should be expected that there may be other cases where
this incorrect assumption causes further problems.
Fix #20804
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* merge `CheckLFSVersion` into `InitFull` (renamed from `InitWithSyncOnce`)
* remove the `Once` during git init, no data-race now
* for doctor sub-commands, `InitFull` should only be called in the initialization stage
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Added support for Pub packages.
* Update docs/content/doc/packages/overview.en-us.md
Co-authored-by: Gergely Nagy <algernon@users.noreply.github.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gergely Nagy <algernon@users.noreply.github.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
- Add a new push mirror to specific repository
- Sync now (send all the changes to the configured push mirrors)
- Get list of all push mirrors of a repository
- Get a push mirror by ID
- Delete push mirror by ID
Signed-off-by: Mohamed Sekour <mohamed.sekour@exfo.com>
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
WebAuthn has updated its specification to set the maximum size of the
CredentialID to 1023 bytes. This is somewhat larger than our current
size and therefore we need to migrate.
The PR changes the struct to add CredentialIDBytes and migrates the CredentialID string
to the bytes field before another migration drops the old CredentialID field. Another migration
renames this field back.
Fix #20457
Signed-off-by: Andrew Thornton <art27@cantab.net>
Sometimes users want to receive email notifications of messages they create or reply to.
Added an option to the personal preferences to allow users to choose.
Closes #20149
I was looking into the visibility checks because I need them for something different and noticed the checks are more complicated than they have to be.
The rule is just (see the sketch after this list): a user/org is visible if
- The doer is a member of the org, regardless of the org visibility
- The doer is not restricted and the user/org is public or limited
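A minimal sketch of that rule, with illustrative types and an illustrative membership callback rather than the actual Gitea code; the doer is assumed to be signed in and anonymous handling is omitted.

```go
package structs

// Visibility mirrors the public/limited/private levels mentioned above.
type Visibility int

const (
	Public Visibility = iota
	Limited
	Private
)

// User is a pared-down stand-in for a Gitea user or organization.
type User struct {
	ID           int64
	IsRestricted bool
	Visibility   Visibility
}

// IsVisibleTo applies the two-part rule: org members always see the org;
// everyone else needs to be unrestricted and the target public or limited.
func IsVisibleTo(target, doer *User, isMember func(orgID, userID int64) bool) bool {
	if isMember(target.ID, doer.ID) {
		return true
	}
	return !doer.IsRestricted &&
		(target.Visibility == Public || target.Visibility == Limited)
}
```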
When viewing a subdirectory and the latest commit to that directory in
the table, the commit status icon incorrectly showed the status of the
HEAD commit instead of the latest for that directory.
* Fixes issue #19603 (not able to merge a commit in a PR when branch contents are the same but commit IDs differ)
* fill HeadCommitID in PullRequest
* compare real commit IDs as the check for merging
* based on @zeripath's patch in #19738
- Currently when a Team has read access to an organization's non-private
repository, their access won't be stored in the database. This caused
issues for code that relies on read access being stored. So from now on, if
we see that the repository is owned by an organization, don't increase the
minMode to write permission.
- Resolves #20083
Support synchronizing with the push mirrors whenever new commits are pushed or synced from a pull mirror.
Related Issues: #18220
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Add git.HOME_PATH
* add legacy file check
* Apply suggestions from code review
Co-authored-by: zeripath <art27@cantab.net>
* pass env GNUPGHOME to git command, move the existing .gitconfig to new home, make the fix for 1.17rc more clear.
* set git.HOME_PATH for docker images to default HOME
* Revert "set git.HOME_PATH for docker images to default HOME"
This reverts commit f120101ddc.
* force Gitea to use a stable GNUPGHOME directory
* extra check to ensure only process dir or symlink for legacy files
* refactor variable name
* The legacy dir check (for 1.17-rc1) could be removed with 1.18 release, since users should have upgraded from 1.17-rc to 1.17-stable
* Update modules/git/git.go
Co-authored-by: Steven Kriegler <61625851+justusbunsi@users.noreply.github.com>
* remove initFixGitHome117rc
* Update git.go
* Update docs/content/doc/advanced/config-cheat-sheet.en-us.md
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Steven Kriegler <61625851+justusbunsi@users.noreply.github.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Users who are following or being followed by a user should only be
displayed if the viewing user can see them.
Signed-off-by: Andrew Thornton <art27@cantab.net>
The setting `DEFAULT_SHOW_FULL_NAME` promises to use the user's full name everywhere it can be used.
Unfortunately the function `*user_model.User.ShortName()` currently uses the `.Name` instead - but this should also use the `.FullName()`.
Therefore we should make `*user_model.User.ShortName()` base its pre-shortened name on the `.FullName()` function.
Unfortunately the previous PR #20035 created indices that were not helpful
for SQLite. This PR adjusts these after testing using the try.gitea.io db.
Fix #20129
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Check if project has the same repository id with issue when assign project to issue
* Check if issue's repository id matches project's repository id
* Add more permission checking
* Remove invalid argument
* Fix errors
* Add generic check
* Remove duplicated check
* Return error + add check for new issues
* Apply suggestions from code review
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: 6543 <6543@obermui.de>
This PR adds a new manager command to switch on SQL logging and to turn it off.
```
gitea manager logging log-sql
gitea manager logging log-sql --off
```
Signed-off-by: Andrew Thornton <art27@cantab.net>
There appears to be a strange bug whereby the comment_id index can sometimes be
missing from the action table despite the sync2 that should create it in the earlier
part of this migration. However, looking through the code for Sync2, there is no need
for this pre-code to exist and Sync2 should drop/create the indices as necessary.
I think therefore we should simplify the migration to simply be Sync2.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* go.mod: add go-fed/{httpsig,activity/pub,activity/streams} dependency
go get github.com/go-fed/activity/streams@master
go get github.com/go-fed/activity/pub@master
go get github.com/go-fed/httpsig@master
* activitypub: implement /api/v1/activitypub/user/{username} (#14186)
Return information regarding a Person (as defined in ActivityStreams
https://www.w3.org/TR/activitystreams-vocabulary/#dfn-person).
Refs: https://github.com/go-gitea/gitea/issues/14186
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: add the public key to Person (#14186)
Refs: https://github.com/go-gitea/gitea/issues/14186
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: go-fed conformant Clock instance
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: signing http client
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: implement the ReqSignature middleware
Signed-off-by: Loïc Dachary <loic@dachary.org>
* activitypub: hack_16834
Signed-off-by: Loïc Dachary <loic@dachary.org>
* Fix CI checks-backend errors with go mod tidy
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Change 2021 to 2022, properly format package imports
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Run make fmt and make generate-swagger
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Use Gitea JSON library, add assert for pkp
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Run make fmt again, fix err var redeclaration
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Remove LogSQL from ActivityPub person test
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Assert if json.Unmarshal succeeds
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Cleanup, handle invalid usernames for ActivityPub person GET request
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Rename hack_16834 to user_settings
Signed-off-by: Anthony Wang <ta180m@pm.me>
* Use the httplib module instead of http for GET requests
* Clean up whitespace with make fmt
* Use time.RFC1123 and make the http.Client proxy-aware
* Check if digest algo is supported in setting module
* Clean up some variable declarations
* Remove unneeded copy
* Use system timezone instead of setting.DefaultUILocation
* Use named constant for httpsigExpirationTime
* Make pubKey IRI #main-key instead of /#main-key
* Move /#main-key to #main-key in tests
* Implemented Webfinger endpoint.
* Add visible check.
* Add user profile as alias.
* Add actor IRI and remote interaction URL to WebFinger response
* fmt
* Fix lint errors
* Use go-ap instead of go-fed
* Run go mod tidy to fix missing modules in go.mod and go.sum
* make fmt
* Convert remaining code to go-ap
* Clean up go.sum
* Fix JSON unmarshall error
* Fix CI errors by adding @context to Person() and making sure types match
* Correctly decode JSON in api_activitypub_person_test.go
* Force CI rerun
* Fix TestActivityPubPersonInbox segfault
* Fix lint error
* Use @mariusor's suggestions for idiomatic go-ap usage
* Correctly add inbox/outbox IRIs to person
* Code cleanup
* Remove another LogSQL from ActivityPub person test
* Move httpsig algos slice to an init() function
* Add actor IRI and remote interaction URL to WebFinger response
* Update TestWebFinger to check for ActivityPub IRI in aliases
* make fmt
* Force CI rerun
* WebFinger: Add CORS header and fix Href -> Template for remote interactions
The CORS header is needed due to https://datatracker.ietf.org/doc/html/rfc7033#section-5 and fixes some Peertube <-> Gitea federation issues
* make lint-backend
* Make sure Person endpoint has Content-Type application/activity+json and includes PreferredUsername, URL, and Icon
Setting the correct Content-Type is essential for federating with Mastodon
* Use UTC instead of GMT
* Rename pkey to pubKey
* Make sure HTTP request Date in GMT
* make fmt
* dont drop err
* Make sure API responses always refer to username in original case
Copied from what I wrote on #19133 discussion: Handling username case is a very tricky issue and I've already encountered a Mastodon <-> Gitea federation bug due to Gitea considering Ta180m and ta180m to be the same user while Mastodon thinks they are two different users. I think the best way forward is for Gitea to only use the original case version of the username for federation so other AP software don't get confused.
* Move httpsig algs constant slice to modules/setting/federation.go
* Add new federation settings to app.example.ini and config-cheat-sheet
* Return if marshalling error
* Make sure Person IRIs are generated correctly
This commit ensures that if the setting.AppURL is something like "http://127.0.0.1:42567" (like in the integration tests), a trailing slash will be added after that URL.
* If httpsig verification fails, fix Host header and try again
This fixes a very rare bug when Gitea and another AP server (confirmed to happen with Mastodon) are running on the same machine: Gitea fails to verify incoming HTTP signatures. This is because the other AP server creates the sig with the public Gitea domain as the Host. However, when Gitea receives the request, the Host header is instead localhost, so the signature verification fails. Manually changing the Host header to the correct value and trying the verification again fixes the bug.
* Revert "If httpsig verification fails, fix Host header and try again"
This reverts commit f53e46c721.
The bug was actually caused by nginx messing up the Host header when reverse-proxying since I didn't have the line `proxy_set_header Host $host;` in my nginx config for Gitea.
* Go back to using ap.IRI to generate inbox and outbox IRIs
* use const for key values
* Update routers/web/webfinger.go
* Use ctx.JSON in Person response to make code cleaner
* Revert "Use ctx.JSON in Person response to make code cleaner"
This doesn't work because the ctx.JSON() function already sends the response out and it's too late to edit the headers.
This reverts commit 95aad98897.
* Use activitypub.ActivityStreamsContentType for Person response Content Type
* Limit maximum ActivityPub request and response sizes to a configurable setting
* Move setting key constants to models/user/setting_keys.go
* Fix failing ActivityPubPerson integration test by checking the correct field for username
* Add a warning about changing settings that can break federation
* Add better comments
* Don't multiply Federation.MaxSize by 1<<20 twice
* Add more better comments
* Fix failing ActivityPubMissingPerson test
We now use ctx.ContextUser so the message printed out when a user does not exist is slightly different
* make generate-swagger
For some reason I didn't realize that /templates/swagger/v1_json.tmpl was machine-generated by make generate-swagger... I've been editing it by hand for three months! 🤦
* Move getting the RFC 2616 time to a separate function
* More code cleanup
* Update go-ap to fix empty liked collection and removed unneeded HTTP headers
* go mod tidy
* Add ed25519 to httpsig algorithms
* Use go-ap/jsonld to add @context and marshal JSON
* Change Gitea user agent from the default to Gitea/Version
* Use ctx.ServerError and remove all remote interaction code from webfinger.go
gitea doctor --run check-db-consistency is currently broken due to an incorrect
and old use of Count() with a string.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* clean git support for ver < 2.0
* fine tune tests for markup (which requires git module)
* remove unnecessary comments
* try to fix tests
* try test again
* use const for GitVersionRequired instead of var
* try to fix integration test
* Refactor CheckAttributeReader to make a *git.Repository version
* update document for commit signing with Gitea's internal gitconfig
* update document for commit signing with Gitea's internal gitconfig
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- Don't specify the field in `Count`; instead use `Cols` for this.
- Call `log.Error` when an error occurs.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* When non-admin users use code search, get code unit accessible repos in one main query
* Modified some comments to match the changes
* Removed unnecessary check for Access Mode in Collaboration table
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Move access and repo permission to models/perm/access
* fix test
* fix git test
* Move functions sequence
* Some improvements per @KN4CK3R and @delvh
* Move issues related code to models/issues
* Move some issues related sub package
* Merge
* Fix test
* Fix test
* Fix test
* Fix test
* Rename some files
* Move access and repo permission to models/perm/access
* fix test
* Move some git related files into sub package models/git
* Fix build
* fix git test
* move lfs to sub package
* move more git related functions to models/git
* Move functions sequence
* Some improvements per @KN4CK3R and @delvh
* Move some repository related code into sub package
* Move more repository functions out of models
* Fix lint
* Some performance optimization for webhooks and others
* some refactors
* Fix lint
* Fix
* Update modules/repository/delete.go
Co-authored-by: delvh <dev.lh@web.de>
* Fix test
* Merge
* Fix test
* Fix test
* Fix test
* Fix test
Co-authored-by: delvh <dev.lh@web.de>
Upgrade builder to v0.3.11
Upgrade xorm to v1.3.1 and fix some hidden bugs.
Replace #19821
Replace #19834
Included #19850
Co-authored-by: zeripath <art27@cantab.net>
Milestones in archived repos should not be displayed on `/milestones`. Therefore
we should exclude these repositories from the milestones page.
Fix #18257
Signed-off-by: Andrew Thornton <art27@cantab.net>
Looking through the logs of try.gitea.io I am seeing a number of reports
of being unable to APIformat stopwatches because the issueID is 0. These
are invalid StopWatches and they represent a db inconsistency.
This PR simply stops sending them to the eventsource.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
The issue was that only the actual title was converted to uppercase, but
not the prefix as specified in `WORK_IN_PROGRESS_PREFIXES`. As a result,
the following did not work:
WORK_IN_PROGRESS_PREFIXES=Draft:,[Draft],WIP:,[WIP]
One possible workaround was:
WORK_IN_PROGRESS_PREFIXES=DRAFT:,[DRAFT],WIP:,[WIP]
Then indeed one could use `Draft` (as well as `DRAFT`) in the title.
However, the link `Start the title with DRAFT: to prevent the pull request
from being merged accidentally.` showed the suggestion in uppercase, so
it was not possible to show it as `Draft`. This PR fixes it, and allows
using `Draft` in `WORK_IN_PROGRESS_PREFIXES`.
Fixes #19779.
Co-authored-by: zeripath <art27@cantab.net>
Add the ability to show source/target branches in the pull request list. It can be useful to see which branches are used in each PR right in the list.
Co-authored-by: Alexey Korobkov <akorobkov@cian.ru>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Allows repo search to match against "owner/repo" pattern strings
* Gofumpt
* Adds test case for "owner/repo" style repo search
* With "owner/repo" search terms, prioritise results which match the owner field
* Fixes unquoted SQL string in repo search
* Makes comments in body text/title return the base page URL instead of "" in RefCommentHTMLURL()
* Add comment explaining branch
Co-authored-by: delvh <dev.lh@web.de>
- Don't use a hacky solution to limit to the correct RepoIDs; instead use
the current code to handle these limits. The existing code is more correct
than the hacky solution.
- Resolves #19636
- Add test case
- Don't log the reflect struct, but instead log the ID of the struct.
This improves the error message, as you would actually know which row
caused the error.
* Fix indention
Signed-off-by: kolaente <k@knt.li>
* Add option to merge a pr right now without waiting for the checks to succeed
Signed-off-by: kolaente <k@knt.li>
* Fix lint
Signed-off-by: kolaente <k@knt.li>
* Add scheduled pr merge to tables used for testing
Signed-off-by: kolaente <k@knt.li>
* Add status param to make GetPullRequestByHeadBranch reusable
Signed-off-by: kolaente <k@knt.li>
* Move "Merge now" to a seperate button to make the ui clearer
Signed-off-by: kolaente <k@knt.li>
* Update models/scheduled_pull_request_merge.go
Co-authored-by: 赵智超 <1012112796@qq.com>
* Update web_src/js/index.js
Co-authored-by: 赵智超 <1012112796@qq.com>
* Update web_src/js/index.js
Co-authored-by: 赵智超 <1012112796@qq.com>
* Re-add migration after merge
* Fix frontend lint
* Fix version compare
* Add vendored dependencies
* Add basic tets
* Make sure the api route is capable of scheduling PRs for merging
* Fix comparing version
* make vendor
* adopt refactor
* apply suggestion: User -> Doer
* init var once
* Fix Test
* Update templates/repo/issue/view_content/comments.tmpl
* adopt
* nits
* next
* code format
* lint
* use same name schema; rm CreateUnScheduledPRToAutoMergeComment
* API: can not create schedule twice
* Add TestGetBranchNamesForSha
* nits
* new go routine for each pull to merge
* Update models/pull.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* Update models/scheduled_pull_request_merge.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* fix & add renaming sugestions
* Update services/automerge/pull_auto_merge.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* fix conflict relicts
* apply latest refactors
* fix: migration after merge
* Update models/error.go
Co-authored-by: delvh <dev.lh@web.de>
* Update options/locale/locale_en-US.ini
Co-authored-by: delvh <dev.lh@web.de>
* Update options/locale/locale_en-US.ini
Co-authored-by: delvh <dev.lh@web.de>
* adapt latest refactors
* fix test
* use more context
* skip potential edgecases
* document func usage
* GetBranchNamesForSha() -> GetRefsBySha()
* start refactoring
* adjust to new changes
* nit
* docu nit
* the great check move
* move checks for branchprotection into own package
* resolve todo now ...
* move & rename
* unexport if possible
* fix
* check if merge is allowed before merge on scheduled pull
* debug
* wording
* improve SetDefaults & nits
* NotAllowedToMerge -> DisallowedToMerge
* fix test
* merge files
* use package "errors"
* merge files
* add string names
* other implementation for gogit
* adapt refactor
* more context for models/pull.go
* GetUserRepoPermission use context
* more ctx
* use context for loading pull head/base-repo
* more ctx
* more ctx
* models.LoadIssueCtx()
* models.LoadIssueCtx()
* Handle pull_service.Merge in one DB transaction
* add TODOs
* next
* next
* next
* more ctx
* more ctx
* Start refactoring structure of old pull code ...
* move code into new packages
* shorter names ... and finish **restructure**
* Update models/branches.go
Co-authored-by: zeripath <art27@cantab.net>
* finish UpdateProtectBranch
* more and fix
* update datum
* template: use "svg" helper
* rename prQueue 2 prPatchCheckerQueue
* handle automerge in queue
* lock pull on git&db actions ...
* lock pull on git&db actions ...
* add TODO notes
* the regex
* transaction in tests
* GetRepositoryByIDCtx
* shorter table name and lint fix
* close transaction bevore notify
* Update models/pull.go
* next
* CheckPullMergable check all branch protections!
* Update routers/web/repo/pull.go
* CheckPullMergable check all branch protections!
* Revert "PullService lock via pullID (#19520)" (for now...)
This reverts commit 6cde7c9159a5ea75a10356feb7b8c7ad4c434a9a.
* Update services/pull/check.go
* Use for a repo action one database transaction
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: delvh <dev.lh@web.de>
* Update services/issue/status.go
Co-authored-by: delvh <dev.lh@web.de>
* Update services/issue/status.go
Co-authored-by: delvh <dev.lh@web.de>
* use db.WithTx()
* gofmt
* make pr.GetDefaultMergeMessage() context aware
* make MergePullRequestForm.SetDefaults context aware
* use db.WithTx()
* pull.SetMerged only with context
* fix deadlock in `test-sqlite\#TestAPIBranchProtection`
* dont forget templates
* db.WithTx allow to set the parentCtx
* handle db transaction in service packages but not router
* issue_service.ChangeStatus just had caused another deadlock :/
it has something to do with how the notification package is handled
* if we merge a pull in one database transaction, we get a lock, because merge invokes the internal API, which can't handle open db sessions to the same repo
* adjust to current master
* Apply suggestions from code review
Co-authored-by: delvh <dev.lh@web.de>
* dont open db transaction in router
* make generate-swagger
* one _success less
* wording nit
* rm
* adapt
* remove not needed test files
* rm less diff & use attr in JS
* ...
* Update services/repository/files/commit.go
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* adjust db schema for PullAutoMerge
* skip broken pull refs
* more context in error messages
* remove webUI part for another pull
* remove more WebUI only parts
* API: add CancleAutoMergePR
* Apply suggestions from code review
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* fix lint
* Apply suggestions from code review
* cancle -> cancel
Co-authored-by: delvh <dev.lh@web.de>
* change queue identifier
* fix swagger
* prevent nil issue
* fix and dont drop error
* as per @zeripath
* Update integrations/git_test.go
Co-authored-by: delvh <dev.lh@web.de>
* Update integrations/git_test.go
Co-authored-by: delvh <dev.lh@web.de>
* more declarative integration tests (dedup code)
* use assert.False/True helper
Co-authored-by: 赵智超 <1012112796@qq.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* GetFeeds must always discard actions with dangling repo_id
See https://discourse.gitea.io/t/blank-page-after-login/5051/12
for a panic in 1.16.6.
* add comment to explain the dangling ID in the fixture
* loadRepoOwner must not attempt to use a nil action.Repo
* make fmt
Co-authored-by: Loïc Dachary <loic@dachary.org>
* Only check for non-finished migrating task
- Only check if a non-finished migrating task exists for a mirror before
fetching the mirror details from the database.
- Resolves #19600
- Regression: #19588
* Clarify function
- When a repository is still being migrated, don't try to fetch the
Mirror from the database. Instead skip it. This allows visiting
repositories that are still being migrated and were configured to be
mirrored.
- Resolves #19585
- Regression: #19295
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* Apply DefaultUserIsRestricted in CreateUser
* Enforce system defaults in CreateUser
Allow for overwrites with CreateUserOverwriteOptions
* Fix compilation errors
* Add "restricted" option to create user command
* Add "restricted" option to create user admin api
* Respect default setting.Service.RegisterEmailConfirm and setting.Service.RegisterManualConfirm where needed
* Revert "Respect default setting.Service.RegisterEmailConfirm and setting.Service.RegisterManualConfirm where needed"
This reverts commit ee95d3e8dc.
Targeting #14936, #15332
Adds a collaborator permissions API endpoint according to GitHub API: https://docs.github.com/en/rest/collaborators/collaborators#get-repository-permissions-for-a-user to retrieve a collaborators permissions for a specific repository.
### Checks the repository permissions of a collaborator.
`GET` `/repos/{owner}/{repo}/collaborators/{collaborator}/permission`
Possible `permission` values are `admin`, `write`, `read`, `owner`, `none`.
```json
{
"permission": "admin",
"role_name": "admin",
"user": {}
}
```
Where `permission` and `role_name` hold the same `permission` value and `user` is filled with the user API object. Only admins are allowed to use this API endpoint.
Adds a feature [like GitHub has](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) (step 7).
If you create a new PR from a forked repo, you can select (and change later, but only if you are the PR creator/poster) the "Allow edits from maintainers" option.
Then users with write access to the base branch get more permissions on this branch:
* use the update pull request button
* push directly from the command line (`git push`)
* edit/delete/upload files via web UI
* use related API endpoints
You can't merge PRs to this branch with this enabled, you'll need "full" code write permissions.
This feature has a pretty big impact on the permission system. I might have forgotten to change some things or might not have found all security vulnerabilities. In this case, please leave a review or comment on this PR.
Closes #17728
Co-authored-by: 6543 <6543@obermui.de>
* Set correct PR status on 3-way conflict checking
- When 3-way merge is enabled for conflict checking, it has the new,
interesting behavior that it doesn't return any error when it finds a
conflict, so we change the condition to not check for the error, but
instead check if conflictedfiles is populated; this fixes an issue
whereby the PR status wasn't correct on conflicted PRs.
- Refactor the mergeable property (which was incorrectly set and led me to this
bug) to be more maintainable.
- Add a dedicated test for conflicting checking, so it should prevent
future issues with this.
* Fix linter
* remove error which is none
* use setupSessionNoLimit instead of setupSessionWithLimit when no pagination
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Follows: #19284
* The `CopyDir` is only used inside test code
* Rewrite `ToSnakeCase` with more test cases
* The `RedisCacher` only put strings into cache, here we use internal `toStr` to replace the legacy `ToStr`
* The `UniqueQueue` can use string as ID directly, no need to call `ToStr`
The main purpose is to refactor the legacy `unknwon/com` package.
1. Remove most imports of `unknwon/com`, only `util/legacy.go` imports the legacy `unknwon/com`
2. Use golangci's depguard to process denied packages
3. Fix some incorrect values in golangci.yml, e.g. the version should be the quoted string `"1.18"`
4. Use correctly escaped content for `go-import` and `go-source` meta tags
5. Refactor `com.Expand` to our stable (and equally fast) `vars.Expand`; our `vars.Expand` can still return partially rendered content even if the template is not good (eg: key mismatch).
Follows #19266, #8553, Close #18553, now there are only three `Run..(&RunOpts{})` functions.
* before: `stdout, err := RunInDir(path)`
* now: `stdout, _, err := RunStdString(&git.RunOpts{Dir:path})`
This makes the checks in one single place so they don't differ and a maintainer cannot forget a check in one place while adding it to the other (as is the case at the moment).
Fix:
* The API ignores issue dependencies where Web does not
* The API checks "IsSignedIfRequired" where Web does not - the UI probably does, but nothing stops somebody from crafting custom requests
* The default merge message is crafted a bit differently between API and Web if not set in specific cases ...
Add additional usernames which are already routes; remove unused ones.
In future, avoid reserving names as much as possible and use `/-/` in paths instead.
There is yet another problem with conflicted files not being reset when
the test patch resolves them.
This PR adjusts the code for checkConflicts to reset the ConflictedFiles
field immediately at the top. It also adds a reset of conflictedFiles
for the manuallyMerged case and a shortcut for the empty status in
protectedfiles.
Signed-off-by: Andrew Thornton <art27@cantab.net>
The last PR about clone buttons introduced a JS error when visiting an empty repo page:
* https://github.com/go-gitea/gitea/pull/19028
* `Uncaught ReferenceError: isSSH is not defined`, because the variables are scoped and not shared between sub-templates.
This:
1. Simplify `templates/repo/clone_buttons.tmpl` and make code clear
2. Move most JS code into `initRepoCloneLink`
3. Remove unused `CloneLink.Git`
4. Remove `ctx.Data["DisableSSH"] / ctx.Data["ExposeAnonSSH"] / ctx.Data["DisableHTTP"]`, and only set them when needed (eg: deploy keys / ssh keys)
5. Introduce `Data["CloneButton*"]` to provide data for clone buttons and links
6. Introduce `Data["RepoCloneLink"]` for the repo clone link (not the wiki)
7. Remove most `ctx.Data["PageIsWiki"]` because it has been set in the `/wiki` middleware
8. Remove incorrect `quickstart` class in `migrating.tmpl`
There is a bug in the system webhooks whereby the active state is not checked when
webhooks are prepared, and there is a bug whereby deactivating webhooks does not prevent
queued deliveries.
* Only add SystemWebhooks to the prepareWebhooks list if they are active
* At the time of delivery, if the underlying webhook is not active, mark it
as "delivered" but with a failed delivery so it does not actually get delivered.
Fix #19220
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Touch mirrors even when they fail to update
If a mirror fails to be synchronised it should be pushed to the bottom of the queue
of the mirrors awaiting synchronisation. At present, if there are LIMIT broken
mirrors, they can effectively prevent all other mirrors from being synchronized,
as their last_updated time will remain earlier than that of other mirrors.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Set the default branch for repositories generated from templates
* Allows default branch to be set through the API for repos generated from templates
* Update swagger API template
* Only set default branch to the one from the template if not specified
* Use specified default branch if it exists while generating git commits
Fix #19082
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
- Restrict which issues can be shown based on whether the user or team has write permission to the repository.
- Fixes an issue whereby you wouldn't see any issues associated with a specific team in an organization if you weren't a member (fixed by zeroing the User{ID} in the options).
- Resolves #18913
* Clean up protected_branches when deleting user
fixes #19094
* Clean up protected_branches when deleting teams
* fix issue
Co-authored-by: Lauris BH <lauris@nix.lv>
Storing the foreign identifier of an imported issue in the database is a prerequisite to implementing idempotent migrations or mirroring for issues. It is a baby step towards mirroring that introduces a new table.
At the moment, when an issue is created by the Gitea uploader, it fails if the issue already exists. The Gitea uploader could be modified so that, instead of failing, it looks up the database to find an existing issue. And if it does, it would update the issue instead of creating a new one. However this is not currently possible because a piece of information is missing from the database: the foreign identifier that uniquely represents the issue being migrated is not persisted. With this change, the foreign identifier is stored in the database and the Gitea uploader will then be able to run a query to figure out if a given issue being imported already exists.
The implementation of mirroring for issues, pull requests, releases, etc. can be done in three steps:
1. Store an identifier for the element being mirrored (issue, pull request...) in the database (this is the purpose of these changes)
2. Modify the Gitea uploader to be able to update an existing repository with all it contains (issues, pull request...) instead of failing if it exists
3. Optimize the Gitea uploader to speed up the updates, when possible.
The second step creates code that does not yet exist to enable idempotent migrations with the Gitea uploader. When a migration is done for the first time, the behavior is not changed. But when a migration is done for a repository that already exists, this new code is used to update it.
The third step can use the code created in the second step to optimize and speed up migrations. For instance, when a migration is resumed, an issue that has an update time that is not more recent can be skipped and only newly created issues or updated ones will be updated. Another example of optimization could be that a webhook notifies Gitea when an issue is updated. The code triggered by the webhook would download only this issue and call the code created in the second step to update the issue, as if it was in the process of an idempotent migration.
The ForeignReferences table is added to contain local and foreign ID pairs relative to a given repository. It can later be used for pull requests and other artifacts that can be mirrored. Although the foreign id could be added as a single field in issues or pull requests, it would need to be added to all tables that represent something that can be mirrored. Creating a new table makes for a simpler and more generic design. The drawback is that it requires an extra lookup to obtain the information. However, this extra information is only required during migration or mirroring and does not impact the way Gitea currently works.
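For illustration, such a table can be modelled roughly as follows; the field names, tags and types are assumptions, not the exact Gitea schema.

```go
package foreignreference

// ForeignReference links a locally imported artifact to the identifier it
// had on the source forge, scoped to a repository and an artifact type.
type ForeignReference struct {
	ID           int64  `xorm:"pk autoincr"`
	RepoID       int64  `xorm:"INDEX"`
	LocalIndex   int64  // e.g. the issue index inside the repository
	ForeignIndex string // the identifier on the source forge
	Type         string // e.g. "issue"; pull requests, releases, ... can follow
}
```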
The foreign identifier of an issue or pull request is similar to the identifier of an external user, which is stored in reactions, issues, etc. as OriginalPosterID and so on. The representation of a user is however different and the ability of users to link their account to an external user at a later time is also a logic that is different from what is involved in mirroring or migrations. For these reasons, despite some commonalities, it is unclear at this time how the two tables (foreign reference and external user) could be merged together.
The ForeignID field is extracted from the issue migration context so that it can be dumped in files with dump-repo and later restored via restore-repo.
The GetAllComments downloader method is introduced to simplify the implementation and not overload the Context for the purpose of pagination. It also clarifies in which context the comments are paginated and in which context they are not.
The Context interface is no longer useful for the purpose of retrieving the LocalID and ForeignID since they are now both available from the PullRequest and Issue struct. The Reviewable and Commentable interfaces replace and serve the same purpose.
The Context data member of PullRequest and Issue becomes a DownloaderContext to clarify that its purpose is not to support in memory operations while the current downloader is acting but is not otherwise persisted. It is, for instance, used by the GitLab downloader to store the IsMergeRequest boolean and sort out issues.
---
[source](https://lab.forgefriends.org/forgefriends/forgefriends/-/merge_requests/36)
Signed-off-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Loïc Dachary <loic@dachary.org>
* Update the webauthn_credential_id_sequence in Postgres
There is (yet) another problem with v210 in that Postgres will silently allow preset
ID insertions ... but it will not update the sequence value.
This PR simply adds a little step to the end of the v210 migration to update the
sequence number.
Users who have already migrated who find that they cannot insert new
webauthn_credentials into the DB can either run:
```bash
gitea doctor recreate-table webauthn_credential
```
or
```bash
./gitea doctor --run=check-db-consistency --fix
```
which will fix the bad sequence.
Fix #19012
Signed-off-by: Andrew Thornton <art27@cantab.net>
Yet another issue has come up where the logging from SyncMirrors does not provide
enough context. This PR adds more context to these logging events.
Related #19038
Signed-off-by: Andrew Thornton <art27@cantab.net>
The review request feature was added in https://github.com/go-gitea/gitea/pull/10756,
where the doer got explicitly excluded from available reviewers. I don't see a
functionality or security related reason to forbid this case.
As shown by GitHub's implementation, it may be useful to self-request a review,
to remind oneself about reviewing, while communicating to teammates that a
review is missing.
Co-authored-by: delvh <dev.lh@web.de>
* ignore missing comment for user notifications
* instead fix bug in notifications model
* use local variable instead
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Add new feature to delete issues and pulls via API
Co-authored-by: fnetx <git@fralix.ovh>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: 6543 <6543@obermui.de>
- Add helper method to reduce redundancy
- Expand the scope from displaying days to years
- Reduce irrelevance by not displaying small units (hours, minutes, seconds) when bigger ones apply (years)
* Avoid database lookups for `DescriptionHTML`
- Don't compose metas for DescriptionHTML; they are only needed in
order to correctly format and show issues, but it's highly unlikely that
a repository description will refer to a local issue.
Using 125 connections for 5 seconds on `/explore/repos` (the most
noticeable consumer of this function's database lookups):
Before:
```
Statistics      Avg       Stdev      Max
Reqs/sec      569.41     506.05    2715.00
Latency      214.27ms    16.60ms   294.84ms
HTTP codes:
  1xx - 0, 2xx - 2974, 3xx - 0, 4xx - 0, 5xx - 0
  others - 0
Throughput: 27.17MB/s
```
After:
```
Statistics      Avg       Stdev      Max
Reqs/sec     1585.04     789.84    4144.56
Latency       78.89ms    15.89ms   206.94ms
HTTP codes:
  1xx - 0, 2xx - 7975, 3xx - 0, 4xx - 0, 5xx - 0
  others - 0
Throughput: 73.85MB/s
```
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
* Fix page and missing return on unadopted repos API
Page must be 1 if it's not specified and it should return after sending an internal server error.
* Allow ignoring pages
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- Fix regression caused by: f1b1472632
- Don't try to insert an email for Organisations (as they don't have one).
- Resolves#18891
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* logs: add the buffer logger to inspect logs during testing
Signed-off-by: Loïc Dachary <loic@dachary.org>
* migrations: add test for importing pull requests in gitea uploader
Signed-off-by: Loïc Dachary <loic@dachary.org>
* for each git.OpenRepositoryCtx, call Close
* Content is expected to return the content of the log
* test for errors before defer
Co-authored-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Following the merging of #17811, teams can now have differing write and read-only permissions; however, the assignee list will not include teams which have mixed permissions.
Furthermore, the org sidebar is no longer helpful, as it cannot describe these mixed-permission situations.
Fix#18572
Signed-off-by: Andrew Thornton <art27@cantab.net>
We can't depend on `latest` version of gofumpt because the output will
not be stable across versions. Lock it down to the latest version
released yesterday and run it again.
* Use email_address table to check user's email when logging in with email address
* Update services/auth/signin.go
* Fix test
* Fix test
* Fix logging in with ldap username != loginname
* Fix if user does not exist yet
* Make more clear this is loginName
* Fix formatting
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
v208.go is seriously broken as it misses an ID() check. We need to no-op v208 and re-migrate all of the u2f keys.
See #18756
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Fix display time of milestones
* Move the SecToTime function
From the models/issue_stopwatch.go file to the modules/util package
* Rename the sec_to_time file
* Updated formatting
* Include copyright notice in sec_to_time.go
* Apply PR review suggestions
- Update copyright notice dates to 2022
- Change `1 day 3h 5min 7s` to `1d 3h 5m 7s`
* Rename hrs var and combine conditions
* Update unit tests to match new time pattern
Changed `1min` to `1m`
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Unfortunately, credentialIDs in u2f are 255 bytes long, which becomes 408 bytes with
base32 encoding. The default size of a xorm string field is only a VARCHAR(255).
This problem is not apparent on SQLite because strings get mapped to TEXT there.
Fix#18727
Signed-off-by: Andrew Thornton <art27@cantab.net>
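For illustration, a minimal sketch of the kind of column widening this implies, assuming xorm struct tags; the type and field names are illustrative, not the actual Gitea model:
```go
// webauthnCredential sketches the shape of the fix: the credential ID is a
// base32 encoding of up to 255 bytes (408 characters), so the column must be
// widened explicitly instead of relying on xorm's default VARCHAR(255).
type webauthnCredential struct {
	ID           int64  `xorm:"pk autoincr"`
	CredentialID string `xorm:"INDEX VARCHAR(410)"`
}
```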
- Don't let `TypeExternalTracker` or `TypeExternalWiki` influence the
minimal permission, as they will never be higher than read; otherwise, even
if all the other units are write, these two would cap the result at
read.
- Partially resolves#18572 (Point 1,2,5?)
Co-authored-by: zeripath <art27@cantab.net>
There is no need to call UpdateRepoStats in the InsertIssues and
InsertPullRequests functions. They are only called during migration by
the CreateIssues and CreateReviews methods of the gitea uploader.
The UpdateRepoStats function will be called by the Finish method of
the gitea uploader after all reviews and issues are inserted. Calling
it before is therefore redundant and the associated SQL requests are
not cheap.
The statistics tests done after inserting an issue or a pull request
are also removed. They predate the implementation of UpdateRepoStats,
back when the calculation of the statistics was an integral part of
the migration function. UpdateRepoStats is now tested
independently and these tests are no longer necessary.
Signed-off-by: singuliere <singuliere@autistici.org>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
When calling DumpRepository and RestoreRepository on the same Gitea
instance, the users are preserved: all labels, issues etc. belong to
the external user who is, in this particular case, the local user.
Dead code verifying g.gitServiceType.Name() == "" (i.e. plain git) is
removed. The function is never called because the plain git downloader
does not migrate anything that is associated to a user, by definition.
Errors returned by GetUserIDByExternalUserID are no longer ignored.
The userMap is now used when the external user is not known, which is the
most common case. Previously it was only used when the external user exists,
which happens less often; as a result, every occurrence of an
unknown external user required a SQL query.
Signed-off-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
- Switch to use `CryptoRandomBytes` instead of `CryptoRandomString`; OAuth secrets are copy-pasted and don't need to avoid dubious characters.
- `CryptoRandomBytes` gives 2^256 ≈ 1.15 × 10^77 possible states; `CryptoRandomString` gives 62^44 ≈ 7.33 × 10^78 possible states.
- Add a prefix, such that code scanners can easily grep these in source code.
- 32 Bytes + prefix
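A minimal sketch of generating such a secret; the `gta_` prefix and helper name are assumptions for illustration only:
```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newClientSecret builds a secret from 32 random bytes; the "gta_" prefix is
// purely illustrative and only exists so code scanners can grep for leaks.
func newClientSecret() (string, error) {
	raw := make([]byte, 32)
	if _, err := rand.Read(raw); err != nil {
		return "", err
	}
	return "gta_" + hex.EncodeToString(raw), nil
}

func main() {
	secret, err := newClientSecret()
	if err != nil {
		panic(err)
	}
	fmt.Println(secret)
}
```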
* Collaborator trust model should trust collaborators
There was an unintended regression in #17917 which leads to only
repository admin commits being trusted. This PR restores the old logic.
Fix#18501
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Correct use of `UserID` in `SearchTeams`
- Use `UserID` in the `SearchTeams` function; previously it was useless
to pass such information. Now it does an INNER JOIN on `team_user`,
which obtains UserID -> TeamID data.
- Make OrgID optional.
- Resolves#18484
* Separate searching for a specific user
* Add condition back
* Use correct struct type
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* add test coverage for original author conversion during migrations
And create a function to factorize a code snippet that is repeated
five times and would otherwise be more difficult to test and maintain
consistently.
Signed-off-by: Loïc Dachary <loic@dachary.org>
* fix variable scope and int64 formatting
* add missing calls to remapExternalUser and fix misplaced %d
Co-authored-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
The endpoint /{username}/{reponame}/milestone/{id} is not currently restricted to
the repo. This PR restricts the milestones to those within the repo.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Add config option to hide issue events
Adds a config option `HIDE_ISSUE_EVENTS` to hide most issue events (changed labels, milestones, projects...) on the issue detail page.
If this is true, only the following events (comment types) are shown:
* plain comments
* closed/reopened/merged
* reviews
* Make configurable using a list
* Add docs
* Add missing newline
* Fix merge issues
* Allow changes per user settings
* Fix lint
* Rm old docs
* Apply suggestions from code review
* Use bitsets
* Rm comment
* fmt
* Fix lint
* Use variable/constant to provide key
* fmt
* fix lint
* refactor
* Add a prefix for user setting key
* Add license comment
* Add license comment
* Update services/forms/user_form_hidden_comments.go
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* check len == 0
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: 6543 <6543@obermui.de>
This PR continues the work in #17125 by progressively ensuring that git
commands run within the request context.
This now means that if there is a git repo already open in the context, it will be used instead of reopening it.
Signed-off-by: Andrew Thornton <art27@cantab.net>
The CheckRepoStats function missed the following counters:
- label num_closed_issues & num_closed_pulls
- milestone num_closed_issues & num_closed_pulls
The update SQL statements for updating the repository
num_closed_issues & num_closed_pulls fields were repeated in three
functions (repo.CheckRepoStats, migrate.insertIssues and
models.Issue.updateClosedNum) and were moved to a single helper.
UpdateRepoStats is implemented and called in the Finish migration method so that it happens immediately instead of waiting for
CheckRepoStats to run.
Signed-off-by: Loïc Dachary <loic@dachary.org>
---
[source](https://lab.forgefriends.org/forgefriends/forgefriends/-/merge_requests/34)
This contains some additional fixes and small nits related to #17957
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Migrate from U2F to Webauthn
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
We don't want reviews to count towards comments, as this needs changes
in other components as well (e.g. the repo stats cron job).
Co-authored-by: 6543 <6543@obermui.de>
* migrations: a deadline at January 1st, 1970 is valid
Do not change the deadline value if it is set to January 1st, 1970.
Setting the deadline to year 9999 when it is zero (which is equal to
January 1st, 1970) modifies a deadline set to January 1st, 1970 which
is a valid date. In addition, setting a date in year 9999 will be
converted to a null date in some cases.
Signed-off-by: Loïc Dachary <loic@dachary.org>
* tests: set milestone.deadline_unix in fixtures
The value of deadline_unix must be set to 253370764800 (i.e. 9999-01-01) in
fixtures, otherwise it will be inserted as null which leads to
unexpected errors. For instance, DumpRepository will store a null
deadline_unix as 0 (i.e. 1970-01-01) and RestoreRepository will change
it to 9999-01-01.
Signed-off-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
- Don't use the `ioutil` package anymore, as it doesn't do anything special
since Go 1.16:
```
// As of Go 1.16, the same functionality is now provided
// by package io or package os, and those implementations
// should be preferred in new code.
```
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
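For reference, the usual Go 1.16+ replacements look like this (a small sketch, not the actual diff):
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// ioutil.WriteFile -> os.WriteFile
	if err := os.WriteFile("example.txt", []byte("hello"), 0o600); err != nil {
		panic(err)
	}
	// ioutil.ReadFile -> os.ReadFile
	data, err := os.ReadFile("example.txt")
	if err != nil {
		panic(err)
	}
	// ioutil.TempDir -> os.MkdirTemp
	dir, err := os.MkdirTemp("", "example")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	fmt.Println(string(data), dir)
}
```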
* Team permissions: allow different units to have different permissions
* Finish the interface and the logic
* Fix lint
* Fix translation
* align center for table cell content
* Fix fixture
* merge
* Fix test
* Add deprecated
* Improve code
* Add tooltip
* Fix swagger
* Fix newline
* Fix tests
* Fix tests
* Fix test
* Fix test
* Max permission of external wiki and issues should be read
* Move team units with limited max level below units table
* Update label and column names
* Some improvements
* Fix lint
* Some improvements
* Fix template variables
* Add permission docs
* improve doc
* Fix fixture
* Fix bug
* Fix some bug
* fix
* gofumpt
* Integration test for migration (#18124)
integrations: basic test for Gitea {dump,restore}-repo
This is a first step for integration testing of DumpRepository and
RestoreRepository. It:
* runs a Gitea server,
* dumps a repo via DumpRepository to the filesystem,
* restores the repo via RestoreRepository from the filesystem,
* dumps the restored repository to the filesystem,
* compares the first and second dump and expects them to be identical.
The verification is trivial and the goal is to add more tests for each
topic of the dump.
Signed-off-by: Loïc Dachary <loic@dachary.org>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Aravinth Manivannan <realaravinth@batsense.net>
- The current implementation of `RandomString` doesn't give the best possible randomness: it yields 6*`length` bits instead of the possible 8*`length` bits (i.e. `length` bytes) of randomness. This is because `RandomString` is limited to a max value of 63 in order to represent each random byte as a letter/digit.
- The recommendation for pbkdf2 is to use a 64+ bit salt, which `RandomString` doesn't give with a length of 10. Instead of increasing 10 to a higher number, this patch adds a new function called `RandomBytes` which does guarantee the full 8*`length` bits (i.e. `length` bytes) of randomness.
- Use hexadecimal to store the bytes value in the database, as the raw bytes don't convert nicely to a string. This will always have a length of 32 (with `length` being 16).
- When we detect on `Authenticate` (source: db) that a user has the old salt format, re-hash the password so that the user's password is hashed with the increased salt.
Thanks to @zeripath for working out the rough edges from my first commit 😄.
Co-authored-by: lafriks <lauris@nix.lv>
Co-authored-by: zeripath <art27@cantab.net>
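A minimal sketch of the approach, assuming illustrative helper names rather than the exact Gitea functions:
```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomBytes returns length cryptographically secure random bytes,
// i.e. the full 8*length bits of randomness.
func randomBytes(length int) ([]byte, error) {
	b := make([]byte, length)
	if _, err := rand.Read(b); err != nil {
		return nil, err
	}
	return b, nil
}

func main() {
	// 16 random bytes give a 128-bit salt, stored as 32 hex characters.
	salt, err := randomBytes(16)
	if err != nil {
		panic(err)
	}
	fmt.Println(hex.EncodeToString(salt))
}
```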
They were previously not covered at all, either by integration tests or unit tests.
This PR also fixes a bug where the `num_comments` field was incorrectly set to include all types of comments.
It sets num_closed_issues: 0 as default in milestone unit test fixtures. If they are not set, Incr("num_closed_issues") will be a noop because the field is null.
* Add API to get issue/pull comments and events (timeline)
Adds an API to get both comments and events in one endpoint with all required data.
Closes go-gitea/gitea#13250
* Fix swagger
* Don't show code comments (use review api instead)
* fmt
* Fix comment
* Time -> TrackedTime
* Use var directly
* Add logger
* Fix lint
* Fix test
* Add comments
* fmt
* [test] get issue directly by ID
* Update test
* Add description for changed refs
* Fix build issues + lint
* Fix build
* Use string enums
* Update swagger
* Support `page` and `limit` params
* fmt + swagger
* Use global slices
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
This PR reworked the Find pointer files feature on the Settings -> LFS page.
When an LFS object is missing from the database but exists in the LFS content store, an admin can associate it to the repository by clicking the Associate button.
This PR is not perfect (because the LFS module itself should be improved too); it's just a nice-to-have feature to help users recover their LFS repositories (e.g. when the database was lost or a table was truncated).
The GITEA_UNIT_TESTS_VERBOSE variable is an undocumented variable
introduced in 2017 (see 1028ef2def)
whose sole purpose has been to log SQL statements when running unit
tests.
It is renamed for clarity and a warning is displayed for backward
compatibility for people and scripts that know about it.
The documentation is updated to reflect this change.
When viewing issues in sorted order, some issues are duplicated across
pages and some are missing. This is caused by the lack of tie-breakers
in database queries, making pagination inconsistent.
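A minimal sketch of the fix, assuming xorm and a `created_unix` sort column (both assumptions); the point is only that a unique column such as `id` is appended as a tie-breaker:
```go
import "xorm.io/xorm"

// findIssuesPage sketches a paged listing whose ordering is deterministic:
// the unique id column is appended after the (possibly non-unique) sort key,
// so two rows with the same created_unix can never swap between pages.
func findIssuesPage(sess *xorm.Session, page, pageSize int) *xorm.Session {
	return sess.
		OrderBy("created_unix DESC, id DESC").
		Limit(pageSize, (page-1)*pageSize)
}
```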
* Add support for ssh commit signing
* Split out ssh verification to separate file
* Show ssh key fingerprint on commit page
* Update sshsig lib
* Make sure we verify against correct namespace
* Add ssh public key verification via ssh signatures
When adding a public ssh key also validate that this user actually
owns the key by signing a token with the private key.
* Remove some gpg references and make verify key optional
* Fix spaces indentation
* Update options/locale/locale_en-US.ini
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update templates/user/settings/keys_ssh.tmpl
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update options/locale/locale_en-US.ini
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update options/locale/locale_en-US.ini
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Update models/ssh_key_commit_verification.go
Co-authored-by: Gusted <williamzijl7@hotmail.com>
* Reword ssh/gpg_key_success message
* Change Badsignature to NoKeyFound
* Add sign/verify tests
* Fix upstream api changes to user_model User
* Match exact on SSH signature
* Fix code review remarks
Co-authored-by: Gusted <williamzijl7@hotmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
This PR contains multiple fixes. The most important of which is:
* Prevent hang in git cat-file if the repository is not a valid repository
Unfortunately, it appears that if git cat-file is run in an invalid
repository it will hang until stdin is closed. This will result in
deadlocked /pulls pages and dangling git cat-file calls if a broken
repository is reviewed or pull requests exist for a broken
repository.
Fix#14734, Fix#9271, Fix#16113
Otherwise, there are a few other small fixes included which this PR was initially intending to address:
* Fix panic on partial compares due to missing PullRequestWorkInProgressPrefixes
* Fix links on pulls pages due to regression from #17551 - by making most /issues routes match /pulls too - Fix#17983
* Fix links on feeds pages due to another regression from #17551 but also fix issue with syncing tags - Fix#17943
* Add missing locale entries for oauth group claims
* Prevent NPEs if ColorFormat is called on nil users, repos or teams.
Save a bit of bandwidth by only requesting 3-times the rendered avatar
size. Factor 4 is only really beneficial on a handful of mobile phones
and I don't think they are the primary device we design for.
Configurability contributed by zeripath.
Fixes: https://github.com/go-gitea/gitea/pull/17422
Fixes: https://github.com/go-gitea/gitea/issues/16287
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
* Add missing `X-Total-Count` and fix some related bugs
Adds the `X-Total-Count` header to APIs that return a list but don't have it yet.
Fixed bugs:
* not returned after reporting error (39eb82446c/routers/api/v1/user/star.go (L70))
* crash with index out of bounds, API issue/issueSubscriptions
I also found various endpoints that return lists but do not apply/support pagination yet:
```
/repos/{owner}/{repo}/issues/{index}/labels
/repos/{owner}/{repo}/issues/comments/{id}/reactions
/repos/{owner}/{repo}/branch_protections
/repos/{owner}/{repo}/contents
/repos/{owner}/{repo}/hooks/git
/repos/{owner}/{repo}/issue_templates
/repos/{owner}/{repo}/releases/{id}/assets
/repos/{owner}/{repo}/reviewers
/repos/{owner}/{repo}/teams
/user/emails
/users/{username}/heatmap
```
If this is not expected, a new issue should be opened.
Closes#13043
* fmt
* Update routers/api/v1/repo/issue_subscription.go
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
* Use FindAndCount
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: 6543 <6543@obermui.de>
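A minimal sketch of what setting the header looks like for a list endpoint, using plain net/http rather than Gitea's own context helpers:
```go
import (
	"net/http"
	"strconv"
)

// writeListHeaders sets the X-Total-Count header so API clients can learn the
// full result size without walking every page, and exposes it for CORS use.
func writeListHeaders(w http.ResponseWriter, totalCount int64) {
	w.Header().Set("X-Total-Count", strconv.FormatInt(totalCount, 10))
	w.Header().Set("Access-Control-Expose-Headers", "X-Total-Count")
}
```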
* Add setting to OAuth handlers to override local 2FA settings
This PR adds a setting to OAuth and OpenID login sources to allow the source to
override local 2FA requirements.
Fix#13939
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Fix regression from #16544
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Add scopes settings
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix trace logging in auth_openid
Signed-off-by: Andrew Thornton <art27@cantab.net>
* add required claim options
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Move UpdateExternalUser to externalaccount
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Allow OAuth2/OIDC to set Admin/Restricted status
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Allow use of the same group claim name for the prohibit login value
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fixup! Move UpdateExternalUser to externalaccount
* as per wxiaoguang
Signed-off-by: Andrew Thornton <art27@cantab.net>
* add label back in
Signed-off-by: Andrew Thornton <art27@cantab.net>
* adjust localisation
Signed-off-by: Andrew Thornton <art27@cantab.net>
* placate lint
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Running `make test-backend` will delete `data/` due to reloading the configuration and resetting the appdatapath.
This PR removes this unnecessary config reload but also adds extra code into the unit test main to prevent its cleanup from deleting the wrong directory.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Move keys to models/keys
* Rename models/keys -> models/asymkey
* change the missed package name
* Fix package alias
* Fix test
* Fix docs
* Fix test
* Fix test
* merge
* Some refactors related repository model
* Move more methods out of repository
* Move repository into models/repo
* Fix test
* Fix test
* some improvements
* Remove unnecessary function
* Fix a panic in NotifyCreateIssueComment (caused by string truncation)
* more unit tests
* refactor
* fix some edge cases
* use SplitStringAtByteN for comment content
* Refactor install page (db type)
* set correct default DB HOST for different DB TYPE
* remove legacy TiDB from documents
* unify the usage of DB TYPE, in code we only use "mysql". "MySQL" is only shown to users for friendly name.
* Gitea can use TiDB via MySQL protocol
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Check if the column exists before renaming; if it exists, just return with no error
* Also check if the errors column exists
* Add comment for migration
* Fix sqlite test
* Improve install code to avoid low-level mistakes.
If a user tries to do a re-install in a Gitea database, they get a warning and a double check.
When Gitea runs, it never creates an empty app.ini automatically.
Also some small (related) refactoring:
* Refactor db.InitEngine related logic make it more clean (especially for the install code)
* Move some i18n strings out of setting.go so that setting.go can be easily maintained.
* Show errors in CLI code if an incorrect app.ini is used.
* APP_DATA_PATH is created when installing, and checked when starting (no empty directory is created any more).
sshd(8) lists restrict as a future-proof way to restrict the features
enabled in ssh. It has been supported since OpenSSH 7.2, released
2016-02-29.
OpenSSH will ignore unknown options (see sshauthopt_parse in
auth-options.c), so it should be safe to add the option and
no-user-rc.
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
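For illustration, a sketch of how an authorized_keys line with these options could be assembled; the serv command and paths are placeholders, only the `restrict` and `no-user-rc` options are the point here:
```go
import "fmt"

// authorizedKeyLine sketches building an authorized_keys entry.
// "restrict" enables all restrictions (disables port, agent and X11
// forwarding as well as PTY allocation); "no-user-rc" additionally disables
// execution of ~/.ssh/rc.
func authorizedKeyLine(appPath string, keyID int64, publicKey string) string {
	return fmt.Sprintf(`restrict,no-user-rc,command="%s serv key-%d" %s`+"\n",
		appPath, keyID, publicKey)
}
```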
* More pleasantly handle broken or missing git repositories
In #17742 it was noted that there was a completely invalid git repository underlying a
repo on gitea.com. This happened due to a problem during a migration; however, it
is not beyond the realms of possibility that such a corruption could occur for another
user.
This PR adds a check to RepoAssignment that will detect if loading a repository has
failed due to an absent git repository. It will then show a page suggesting the user
contact the administrator or delete the repository.
Fix#17742
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Update options/locale/locale_en-US.ini
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Remove unnecessary functions of User struct
* Move more database methods out of user struct
* Move more database methods out of user struct
* Fix template failure
* Fix bug
* Remove finished FIXME
* remove unnecessary code
* Improvements to content history
* initialize content history when making an edit to an old item created before the introduction of content history
* show edit history for code comments on pull request files tab
* Fix a flaw in keepLimitedContentHistory
Fix a flaw in keepLimitedContentHistory, the first and the last should never be deleted
* Remove obsolete eager initialization of content history
Use hostmatcher to replace matchlist.
And we introduce a better DialContext to do a full host/IP check; otherwise attackers can still bypass the allow/block list via a 302 redirection.
* Use a standalone struct name for Organization
* recover unnecessary change
* make the code readable
* Fix template failure
* Fix template failure
* Move HasMemberWithUserID to org
* Fix test
* Remove unnecessary user type check
* Fix test
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
- The code reads the first and second characters of `link` (`link[0]` and `link[1]`).
However, in a rare case `link` could have only 1 character, and thus
`link[1]` will cause a panic.
There are multiple places where Gitea does not properly escape URLs that it is building, and there are multiple places where it builds urls by hand when there is already a simpler function available to do this.
This is an extensive PR attempting to fix these issues.
1. The first commit in this PR looks through all href, src and links in the Gitea codebase and has attempted to catch all the places where there is potentially incomplete escaping.
2. Whilst doing this we will prefer to use functions that create URLs over recreating them by hand.
3. All uses of strings should be directly escaped - even if they are not currently expected to contain escaping characters. The main benefit to doing this will be that we can consider relaxing the constraints on user names and repo names in future.
4. The next commit looks at escaping in the wiki and re-considers the urls that are used there. The improved escaping here allows wiki files containing '/'. (This implementation will currently still place all of the wiki files in the root directory of the repo but this would not be difficult to change.)
5. The title generation in feeds is now properly escaped.
6. EscapePound is no longer needed - urls should be PathEscaped / QueryEscaped as necessary but then re-escaped with Escape when creating html with locales.
Signed-off-by: Andrew Thornton <art27@cantab.net>
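A small sketch of the escaping discipline using only the standard library; the helper and route layout are illustrative:
```go
import "net/url"

// wikiPageLink builds a link to a wiki page, path-escaping every segment so
// that owner, repo and page names containing special characters (including
// '/') survive intact, and query-escaping the query value.
func wikiPageLink(owner, repo, page, action string) string {
	return "/" + url.PathEscape(owner) +
		"/" + url.PathEscape(repo) +
		"/wiki/" + url.PathEscape(page) +
		"?action=" + url.QueryEscape(action)
}
```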
* feat: Allow multiple tags on comments
- Allow for multiple tags (currently Poster + {Owner, Writer}).
- Utilize the Poster tag within the commentTag function and remove the
checking from templates.
- Use bitwise operations on CommentTags to enable specific tags.
- Don't show the poster tag (view_content.tmpl) on the initial issue comment.
* Change parameters naming
* Change function name
* refactor variable wording
* Merge 'master' branch into 'tags-comments' branch
* Change naming
* `tag` -> `role`
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Fix 500 when a comment was deleted which has a notification
* Tolerate missing Comment in other places too
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
This change enables the usage of U2F without being forced to enroll a TOTP authenticator.
The `/user/auth/u2f` page has been changed to hide the "use TOTP instead" bar if TOTP is not enrolled.
Fixes#5410, Fixes#17495
* Fix stat chunks searching
- Fixes an issue whereby the given chunk of issueIDs wasn't respected and
thus the returned results were not the correct results.
* Add tests
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
- This will only allow `GetDeletedBranchByID` to return deleted branches
which are on the repo, and thus not return a deletedBranch from
another repo.
- This should just prevent possible bugs in the future when code
passes the wrong ID into this function.
* Simplify Gothic to use our session store instead of creating a different store
We have been using xormstore to provide a separate session store for our OAuth2 logins;
however, this relies on using gorilla context and some doubling of our session storing.
We can, however, simplify things and simply use our own chi-based session store, thus removing
a cookie and some of the weirdness with missing contexts.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* as per review
Signed-off-by: Andrew Thornton <art27@cantab.net>
* as per review
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Handle MaxTokenLength
Signed-off-by: Andrew Thornton <art27@cantab.net>
* oops
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lauris BH <lauris@nix.lv>
CountOrphanedObjects needs to quote the table it is joining with as this table may
be `user`.
Fix#17485
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Run Migrate in Install rather than just SyncTables
The underlying problem in #17328 appears to be that users are re-running the install
page during upgrades. The function that tests and creates the db did not anticipate
this, and thus instead of the migration scripts being run, a simple sync of the tables occurs.
This then causes a weird partially migrated DB which causes, in this release cycle,
the duplicate column in task table error. It is likely the cause of some weird
partial migration errors in other cycles too.
This PR simply ensures that the migration scripts are also run at this point too.
Fix#17328
Signed-off-by: Andrew Thornton <art27@cantab.net>
There is a small bug in the way that repo access is checked in
repoAssignment: Accessibility is checked by checking if the user has a
marked access to the repository instead of checking if the user has any
team granted access.
This PR changes this permissions check to use HasAccess() which does the
correct test. There is also a fix in the release api ListReleases where
it should return draft releases if the user is a member of a team with
write access to the releases.
The PR also adds a testcase.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Improve: make diff result better, make the HTML element fit the full height in the content history diff dialog
* Bug fix: when editing the main issue, the poster is wrongly set to the issue poster
We have the `AppState` module now, it can store app related data easily. We do not need to create separate tables for each feature.
So the update checker can use `AppState` instead of a dedicated one-row table.
And the code of update checker is moved from `models` to `modules`.
Gitea writes its own AppPath into git hook scripts. If Gitea's AppPath changes, then the git push will fail.
This PR:
* Introduce an AppState module, it can persist app states into database
* During GlobalInit, Gitea will check if the current AppPath is the same as last one. If they don't match, Gitea will sync git hooks.
* Refactor some code to make them more clear.
* Also, "Detect if gitea binary's name changed" #11341 is related, we call models.RewriteAllPublicKeys to update ssh authorized_keys file
* Don't panic if we fail to parse U2FRegistration data
Downgrade the logging statement from Fatal to Error so that errors parsing
U2FRegistration data do not panic; instead, the invalid key will be
skipped and we will attempt to parse the next one, if available.
Signed-off-by: David Jimenez <dvejmz@sgfault.com>
* Ensure that git daemon export ok is created for mirrors
There is an issue with #16508 where it appears that create repo requires that the
repo does not exist. This causes #17241 where an error is reported because of this.
This PR fixes this and also runs update-server-info for mirrors and generated repos.
Fix#17241
Signed-off-by: Andrew Thornton <art27@cantab.net>
It makes the admin's life easier to filter users by various statuses.
* introduce window.config.PageData to pass template data to javascript module and small refactor
move legacy window.ActivityTopAuthors to window.config.PageData.ActivityTopAuthors
make HTML structure more IDE-friendly in footer.tmpl and head.tmpl
remove incorrect <style class="list-search-style"></style> in head.tmpl
use log.Error instead of log.Critical in admin user search
* use LEFT JOIN instead of SubQuery when admin filters users by 2fa. revert non-en locale.
* use OptionalBool instead of status map
* refactor SearchUserOptions.toConds to SearchUserOptions.toSearchQueryBase
* add unit test for user search
* only allow admin to use filters to search users
* issue content history
* Use timeutil.TimeStampNow() for content history time instead of issue/comment.UpdatedUnix (which are not updated in time)
* i18n for frontend
* refactor
* clean up
* fix refactor
* re-format
* temp refactor
* follow db refactor
* rename IssueContentHistory to ContentHistory, remove empty model tags
* fix html
* use avatar refactor to generate avatar url
* add unit test, keep at most 20 history revisions.
* re-format
* syntax nit
* Add issue content history table
* Update models/migrations/v197.go
Co-authored-by: 6543 <6543@obermui.de>
* fix merge
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lauris BH <lauris@nix.lv>
- Update default branch if needed
- Update protected branch if needed
- Update all not merged pull request base branch name
- Rename git branch
- Record this rename work and auto redirect for old branch on ui
Signed-off-by: a1012112796 <1012112796@qq.com>
Co-authored-by: delvh <dev.lh@web.de>
It is possible for a keyring to contain duplicate keys due to jpegs or
other layers. This currently leads to a confusing error for the user - where we report
a duplicate key insertion.
This PR simply coalesces keys into one key if there are duplicates.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Why this refactor
The goal is to move most files from `models` package to `models.xxx` package. Many models depend on avatar model, so just move this first.
And the existing logic is not clear; there are too many functions like `AvatarLink`, `RelAvatarLink`, `SizedRelAvatarLink`, `SizedAvatarLink`, `MakeFinalAvatarURL`, `HashedAvatarLink`, etc. This refactor makes everything clear:
* user.AvatarLink()
* user.AvatarLinkWithSize(size)
* avatars.GenerateEmailAvatarFastLink(email, size)
* avatars.GenerateEmailAvatarFinalLink(email, size)
And much duplicated code is deleted from the route handlers; the handler and the model now share the same avatar logic.
* Nicely handle missing user in collaborations
It is possible to have a collaboration in a repository which refers to a no-longer
existing user. This causes the repository transfer to fail with an unusual error.
This PR makes `repo.getCollaborators()` nicely handle the missing user by ghosting
the collaboration, but also adds a consistency check. It also adds an
Access consistency check.
Fix#17044
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
There was a serious issue with the `gitea dump` command in 1.14.3-1.14.6 which led to corruption of the `config` field of the `repo_unit` table.
This PR adds a doctor command to attempt to fix the broken repo_units. Users affected by #16961 should run:
```
gitea doctor --fix --run fix-broken-repo-units
```
Fix#16961
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Allow LDAP Sources to provide Avatars
Add setting to LDAP source to allow it to provide an Avatar.
Currently this is required to point to the image bytes.
Fix#4144
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Rename as Avatar Attribute (drop JPEG)
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Always synchronize avatar if there is change
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Actually get the avatar from the ldap
Signed-off-by: Andrew Thornton <art27@cantab.net>
* clean-up
Signed-off-by: Andrew Thornton <art27@cantab.net>
* use len()>0 rather than != ""
Signed-off-by: Andrew Thornton <art27@cantab.net>
* slight shortcut in IsUploadAvatarChanged
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* DBContext is just a Context
This PR removes some of the specialness from the DBContext and makes it context
This allows us to simplify the GetEngine code to wrap around any context in future
and means that we can change our loadRepo(e Engine) functions to simply take contexts.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix unit tests
Signed-off-by: Andrew Thornton <art27@cantab.net>
* another place that needs to set the initial context
Signed-off-by: Andrew Thornton <art27@cantab.net>
* avoid race
Signed-off-by: Andrew Thornton <art27@cantab.net>
* change attachment error
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Fix commit status index problem
* remove unused functions
* Add fixture and test for migration
* Fix lint
* Fix fixture
* Fix lint
* Fix test
* Fix bug
* Fix bug
The io/ioutil package has been deprecated as of Go 1.16, see
https://golang.org/doc/go1.16#ioutil. This commit replaces the existing
io/ioutil functions with their new definitions in io and os packages.
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
This PR adds a `ListOptions` type which is not paged but uses absolute values. It is implemented as discussed in Discord.
Extracted from #16510 to clean that PR.
When converting repositories from forks to normal the root NumFork needs to be
decremented too.
Fix#17026
Signed-off-by: Andrew Thornton <art27@cantab.net>
The rollback functionality in
services/repository/repository.go:ForkRepository is incorrect and could
lead to a deadlock as it uses DeleteRepository to delete the rolled-back
repository - a function which creates its own transaction.
This PR adjusts the rollback function to only use RemoveAll as any
database changes will be automatically rolled-back. It also handles
panics and adjusts the Close within WithTx to ensure that if there is a
panic the session will always be closed.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Fixes#16381
Note that changes to unprotected files via the web editor still cannot be pushed directly to the protected branch. I could easily add such support for edits and deletes if needed. But for adding, uploading or renaming unprotected files, it is not trivial.
* Extract & Move GetAffectedFiles to modules/git
If AllowedUserVisibilityModes allows only public & limited, and orgs can be private, a user can create a repo in such an organisation, which will result in an update of the user. On this call the user is validated and will be rejected since private is not allowed, but it's not a user, it's a valid org ...
Co-authored-by: Alexey 〒erentyev <axifnx@gmail.com>
When creating a new issue or comment and pasting/uploading an attachment/image, no issue id is assigned before submit. So if the user gives up on the creation, the attachments lose their key reference and become dirty content: we don't know whether we need to delete the attachment, even if the repository is deleted.
This PR adds a repo_id to the attachment table so that even a newly uploaded attachment with no issue_id or release_id still has a repo_id. When deleting a repository, such attachments can also be deleted.
Co-authored-by: 6543 <6543@obermui.de>
Calculate and return the number of Repositories on the dashboard
Organization list.
This PR restores some of the logic that was removed in #14032 to
calculate the number of repos on the dashboard orgs list.
Fix#16648
Replaces #16799
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Fix dump and restore
* return different error message for get commit
* Fix missing delete release attachment when deleting repository
* Fix ci and add some comments
Co-authored-by: zeripath <art27@cantab.net>
* Refactor the fork service slightly to take ForkRepoOptions
This reduces the number of places we need to change if we want to add other
options during fork time.
Signed-off-by: Kyle Evans <kevans@FreeBSD.org>
* Fix integrations and tests after ForkRepository refactor
Signed-off-by: Kyle Evans <kevans@FreeBSD.org>
* Update OldRepo -> BaseRepo
Signed-off-by: Kyle Evans <kevans@FreeBSD.org>
* gofmt pass
Signed-off-by: Kyle Evans <kevans@FreeBSD.org>
#16831 occurred because of a missed regression. This PR adds a simple test to
try to prevent this occurring again.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Make the group_id a primary key in issue_index. This already has a unique index
and therefore is a good candidate for becoming a primary key.
This PR also changes all other uses of this table to add the group_id as the
primary key.
Fix#16802
Signed-off-by: Andrew Thornton <art27@cantab.net>
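A minimal sketch of such a table definition with xorm `pk` tags; the struct and field names are illustrative:
```go
// resourceIndex sketches the issue_index shape: group_id (the repo) is the
// primary key itself rather than just carrying a unique index, and max_index
// holds the last issue index handed out for that group.
type resourceIndex struct {
	GroupID  int64 `xorm:"pk"`
	MaxIndex int64 `xorm:"index"`
}
```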
The MySQL indexes are not being renamed at the same time as RENAME table despite the
CASCADE. Therefore it is probably better to just recreate the indexes instead.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
One of the issues holding back performance of the API is the problem of hashing.
Whilst banning BASIC authentication with passwords will help, the API Token scheme
still requires a PBKDF2 hash - which means that heavy API use (using Tokens) can
still cause enormous numbers of hash computations.
A slight solution to this, whilst we consider moving to JWT-based tokens and/or
a session-oriented solution, is to simply cache the successful tokens. This has some
security issues, but this should be balanced against the security issues of load from
hashing.
Related #14668
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
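A rough sketch of the caching idea, remembering token hashes that recently verified successfully so repeated API calls skip the PBKDF2 work; names, TTL and eviction are assumptions, not the actual implementation:
```go
import (
	"sync"
	"time"
)

// successCache remembers token hashes that recently verified successfully,
// so repeated API calls with the same token can skip the PBKDF2 work.
type successCache struct {
	mu   sync.Mutex
	seen map[string]time.Time
	ttl  time.Duration
}

func newSuccessCache(ttl time.Duration) *successCache {
	return &successCache{seen: make(map[string]time.Time), ttl: ttl}
}

// Hit reports whether this token hash verified successfully within the TTL.
func (c *successCache) Hit(tokenHash string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	t, ok := c.seen[tokenHash]
	if !ok || time.Since(t) > c.ttl {
		delete(c.seen, tokenHash)
		return false
	}
	return true
}

// Remember records a successful verification for the given token hash.
func (c *successCache) Remember(tokenHash string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.seen[tokenHash] = time.Now()
}
```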
* Add info about list endpoints to CONTRIBUTING.md
* Let all list endpoints return X-Total-Count header
* Add TODOs for GetCombinedCommitStatusByRef
* Fix models/issue_stopwatch.go
* Refactor models.ListDeployKeys
* Introduce helper funcs and use them for SetLinkHeader-related funcs
* Restore compatibility with SQLServer 2008 R2 in migrations
`ALTER TABLE DROP ... IF EXISTS ...` is only supported in SQL Server >16.
The `IF EXISTS` here is belt-and-braces and does not need to be present; therefore it
can be dropped.
We need to figure out some way of restricting our SQL syntax against the minimum
version of SQL Server we will support.
My suspicion is that `ALTER DATABASE database_name SET COMPATIBILITY_LEVEL = 100` may
do that, but there may be other side-effects, so I am not sure whether to do that.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* try just dropping the index only
Signed-off-by: Andrew Thornton <art27@cantab.net>
* use lowercase for system tables
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Fix add authentication page
There is a regression in #16199 whereby the add authentication page
fails to react to the change in selected type.
This is due to the String() method on the LoginSourceType which is ameliorated
with an Int() function being added.
Following on from this there are a few other related bugs.
Fix#16541
Signed-off-by: Andrew Thornton <art27@cantab.net>
`models` does far too much. In particular it handles all `UserSignin`.
It shouldn't be responsible for calling LDAP, SMTP or PAM for signing in.
Therefore we should move this code out of `models`.
This code has to depend on `models` - therefore it belongs in `services`.
There is a package in `services` called `auth` and clearly this functionality belongs in there.
Plan:
- [x] Change `auth.Auth` to `auth.Method` - as they represent methods of authentication.
- [x] Move `models.UserSignIn` into `auth`
- [x] Move `models.ExternalUserLogin`
- [x] Move most of the `LoginVia*` methods to `auth` or subpackages
- [x] Move Resynchronize functionality to `auth`
- Involved some restructuring of `models/ssh_key.go` to reduce the size of this massive file and simplify its files.
- [x] Move the rest of the LDAP functionality in to the ldap subpackage
- [x] Re-factor the login sources to express an interfaces `auth.Source`?
- I've done this through some smaller interfaces Authenticator and Synchronizable - which would allow us to extend things in future
- [x] Now LDAP is out of models - need to think about modules/auth/ldap and I think all of that functionality might just be moveable
- [x] Similarly a lot Oauth2 functionality need not be in models too and should be moved to services/auth/source/oauth2
- [x] modules/auth/oauth2/oauth2.go uses xorm... This is naughty - probably need to move this into models.
- [x] models/oauth2.go - mostly should be in modules/auth/oauth2 or services/auth/source/oauth2
- [x] More simplifications of login_source.go may need to be done
- Allow wiring in of notify registration - *this can now easily be done - but I think we should do it in another PR* - see #16178
- More refactors...?
- OpenID should probably become an auth Method but I think that can be left for another PR
- Methods should also probably be cleaned up - again another PR I think.
- SSPI still needs more refactors.
* Rename auth.Auth to auth.Method
* Restructure ssh_key.go
- move functions from models/user.go that relate to ssh_key to ssh_key
- split ssh_key.go to try create clearer function domains for allow for
future refactors here.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Update templates/admin/hook_new.tmpl
Co-authored-by: a1012112796 <1012112796@qq.com>
* Update services/webhook/wechatwork.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* Improve wechatwork
* Improve wechatwork
* fix
* Update locale_cs-CZ.ini
fix
* fix build
* fix
* fix build
* make webhooks.zh-cn.md
* delete unnecessary blank line
* delete unnecessary blank line
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Update templates/admin/hook_new.tmpl
Co-authored-by: a1012112796 <1012112796@qq.com>
* Update services/webhook/wechatwork.go
Co-authored-by: a1012112796 <1012112796@qq.com>
* Improve wechatwork
* Improve wechatwork
* fix
* fix build
* fix
* fix build
* make webhooks.zh-cn.md
* delete unnecessary blank line
* delete unnecessary blank line
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* fix
* fix
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* fix wechat
* fix wechat
* fix wechat
* fix wechat
* Fix invalid params and typo of email templates (#16394)
Signed-off-by: Meano <meanocat@gmail.com>
* Add LRU mem cache implementation (#16226)
The current default memory cache implementation is unbounded in size and number of
objects cached. This is hardly ideal.
This PR proposes creating a TwoQueue LRU cache as the underlying cache for Gitea.
The cache is limited by the number of objects stored in the cache (rather than size)
for simplicity. The default number of objects is 50000 - which is perhaps too small
as most of our objects cached are going to be much less than 1kB.
It may be worth considering using a different LRU implementation that actively limits
sizes or avoids GC - however, this is just a beginning implementation.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* [skip ci] Updated translations via Crowdin
* Replace `plugins/docker` with `techknowlogick/drone-docker` in ci (#16407)
* plugins/docker -> techknowlogick/drone-docker
* It is multi-arch
* docs: rewrite email setup (#16404)
* Add intro for both the docs page and mailer methods
* Fix numbering level in SMTP section
* Recommends implicit TLS
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
* Validate Issue Index before querying DB (#16406)
* Fix external renderer (#16401)
* fix external renderer
* use GBackground context as fallback
* no fallback, return error
Co-authored-by: Lauris BH <lauris@nix.lv>
* Add checkbox to delete pull branch after successful merge (#16049)
* Add checkbox to delete pull branch after successful merge
* Omit DeleteBranchAfterMerge field in json
* Log a warning instead of error when PR head branch deleted
* Add DefaultDeleteBranchAfterMerge to PullRequestConfig
* Add support for delete_branch_after_merge via API
* Fix for API: the branch should be deleted from the HEAD repo
If head and base repo are the same, reuse the already opened ctx.Repo.GitRepo
* Don't delegate to CleanupBranch, only reuse branch deletion code
CleanupBranch contains too much logic that has already been performed by the Merge
* Reuse gitrepo in MergePullRequest
Co-authored-by: Andrew Thornton <art27@cantab.net>
* [skip ci] Updated translations via Crowdin
* Detect encoding changes while parsing diff (#16330)
* Detect encoding changes while parsing diff
* Let branch/tag name be a valid ref to get CI status (#16400)
* fix #16384
* refactor: move shared helper func to utils package
* extend Tests
* use ctx.Repo.GitRepo if not nil
* fix
* fix
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* Wechat Work (企业微信) webhook
* fix build
* fix build
* Apply suggestions from code review
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-authored-by: myheavily <myheavily>
Co-authored-by: zhaoxin <gitea@fake.local>
Co-authored-by: Meano <Meano@foxmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: GiteaBot <teabot@gitea.io>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Bagas Sanjaya <bagasdotme@gmail.com>
Co-authored-by: Norwin <noerw@users.noreply.github.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Jimmy Praet <jimmy.praet@telenet.be>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Somewhere along the line the creation of git-daemon-export-ok
files disappeared but the updating of these files when
repo visibility changes remained. The problem is that the
current state will create files even when the org or user
is private.
This PR restores creation correctly.
Fix#15521
Signed-off-by: Andrew Thornton <art27@cantab.net>
One of the reasons why #16447 was needed and why #16268 was needed in
the first place was because it appears that editing ldap configuration
doesn't get tested.
This PR therefore adds a basic test that will run the edit pipeline.
In doing so it's now clear that #16447 and #16268 aren't actually
solving #16252. It turns out that what actually happens is that
the bytes are actually double encoded.
This PR now changes the json unmarshal wrapper to handle this double
encode.
Fix#16252
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Unfortunately #16268 contained a terrible error, whereby there was a double
indirection taken when unmarshalling the source data. This fatally breaks
authentication configuration reading.
Fix#16342
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Retry rename on lock induced failures
Due to external locking on Windows it is possible for an
os.Rename to fail if the files or directories are being
used elsewhere.
This PR simply suggests retrying the rename again similar
to how we handle the os.Remove problems.
Fix#16427
Signed-off-by: Andrew Thornton <art27@cantab.net>
* resolve CI fail
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* fix: primary email cannot be activated
* Primary email should be activated together with user account when
'RegisterEmailConfirm' is enabled.
* To fix the existing error state. When 'RegisterEmailConfirm' is enabled, the
admin should have permission to modify the activation status of user emails.
And the user should be allowed to send an activation to the primary email.
* Only judge whether email is primary from email_address table.
* Improve logging and refactor isEmailActive
Co-authored-by: zeripath <art27@cantab.net>
* Add option to provide signed token to verify key ownership
Currently we will only allow a key to be matched to a user if it matches
an activated email address. This PR provides a different mechanism - the
user provides a signature for an automatically generated token (based
on the timestamp, user creation time, user ID, username and primary
email).
* Ensure verified keys can act for all active emails for the user
* Add code to mark keys as verified
* Slight UI adjustments
* Slight UI adjustments 2
* Simplify signature verification slightly
* fix postgres test
* add api routes
* handle swapped primary-keys
* Verify the no-reply address for verified keys
* Only add email addresses that are activated to keys
* Fix committer shortcut properly
* Restructure gpg_keys.go
* Use common Verification Token code
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Add checkbox to delete pull branch after successful merge
* Omit DeleteBranchAfterMerge field in json
* Log a warning instead of error when PR head branch deleted
* Add DefaultDeleteBranchAfterMerge to PullRequestConfig
* Add support for delete_branch_after_merge via API
* Fix for API: the branch should be deleted from the HEAD repo
If head and base repo are the same, reuse the already opened ctx.Repo.GitRepo
* Don't delegate to CleanupBranch, only reuse branch deletion code
CleanupBranch contains too much logic that has already been performed by the Merge
* Reuse gitrepo in MergePullRequest
Co-authored-by: Andrew Thornton <art27@cantab.net>
* Handle misencoding of login_source cfg in mssql
Unfortunately, due to a bug in xorm (see https://gitea.com/xorm/xorm/pulls/1957), updating
loginsources on MSSQL causes them to become corrupted (#16252).
Whilst waiting for the referenced PR to be merged and to handle the corrupted
loginsources correctly we need to add a wrapper to the `FromDB()` methods to look
for and ignore the misplaced BOMs that have been added.
Fix#16252
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Update models/login_source.go
This PR removes multiple unneeded fields from the `HookTask` struct and adds the two headers `X-Hub-Signature` and `X-Hub-Signature-256`.
## ⚠️ BREAKING ⚠️
* The `Secret` field is no longer passed as part of the payload.
* "Breaking" change (or fix?): The webhook history shows the real called url and not the url registered in the webhook (`deliver.go`@129).
Close#16115, Fixes#7788, Fixes#11755
Co-authored-by: zeripath <art27@cantab.net>
Now that #16069 is merged, some sites may wish to enforce that users are all public, limited or private, and/or disallow users from becoming private.
This PR adds functionality and settings to constrain a user's ability to change their visibility.
Co-authored-by: zeripath <art27@cantab.net>
You can limit or hide organisations. This pull request makes that also possible for users:
- new strings to translate
- add checkbox to user profile form
- add checkbox to admin user.edit form
- filter explore page user search
- filter api admin and public user searches
- allow admins view "hidden" users
- add app option DEFAULT_USER_VISIBILITY
- rewrite many files to use Visibility field
- check for teams intersection
- fix context output
- return a fake 404 if not visible
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Andrew Thornton <art27@cantab.net>
* #14559 Reduce amount of email notifications for WIP draft PR's
don't notify repo watchers of WIP draft PR's
* #13190 Notification when WIP Pull Request is ready for review
* Send email notification to repo watchers when WIP PR is created
* Send ui notification to repo watchers when WIP PR is created
* send specific email notification when PR is marked ready for review
instead of reusing the CreatePullRequest action
* Fix lint error
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* invent ctx.QueryOptionalBool
* [API] ListReleases add draft and pre-release filter
* Add X-Total-Count header
* Add a release to fixtures
* Add TEST for API ListReleases
* Add migrating message
Signed-off-by: Andrew Thornton <art27@cantab.net>
* simplify messenger
Signed-off-by: Andrew Thornton <art27@cantab.net>
* make messenger an interface
Signed-off-by: Andrew Thornton <art27@cantab.net>
* rename
Signed-off-by: Andrew Thornton <art27@cantab.net>
* prepare for merge
Signed-off-by: Andrew Thornton <art27@cantab.net>
* as per tech
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
* Only check access tokens if they are likely to be tokens
Gitea will currently check if every password is an access token even though
most passwords are not and cannot be access tokens.
By construction, access tokens are 40-byte hexadecimal strings, therefore only these should
be checked.
Signed-off-by: Andrew Thornton <art27@cantab.net>
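A quick sketch of the kind of pre-check this implies: only candidates that look like 40-character hex digests are worth a token lookup. (The regular expression here is illustrative, not lifted from Gitea.)

```go
package main

import (
	"fmt"
	"regexp"
)

// tokenPattern matches strings with the shape of an access token:
// a 40-character lowercase hexadecimal string.
var tokenPattern = regexp.MustCompile(`^[0-9a-f]{40}$`)

// looksLikeAccessToken reports whether a candidate password is worth a DB lookup.
func looksLikeAccessToken(candidate string) bool {
	return tokenPattern.MatchString(candidate)
}

func main() {
	fmt.Println(looksLikeAccessToken("hunter2"))                                   // false: skip the token check
	fmt.Println(looksLikeAccessToken("3c6b9c53cb53a0e4a2f3a8f0c2b8a1d4e5f6a7b8")) // true: check the token table
}
```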
When sorting issues by deadline, the deadline of the milestone the issue
is attached to wasn't taken into account.
This has been changed so that the nearest deadline is taken into account for
sorting.
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Unfortunately the v180 migration picked up a few non-standalone dependencies. This PR
forcibly copies the important parts back into the migration.
Fix #16150
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Add a new table issue_index to store the max issue index so that issues can be deleted without causing duplicated indexes
* Fix pull index
* Add tests for concurrent creating issues
* Fix lint
* Fix tests
* Fix postgres test
* Add test for migration v180
* Rename wrong test file name
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Always store primary email address into email_address table and also the state
* Add lower_email so the stored email doesn't have to be converted to lowercase and is kept as added
* Fix fixture
* Fix tests
* Use BeforeInsert to save lower email
* Fix v180 migration
* fix tests
* Fix test
* Remove wrongly submitted code
* Fix test
* Fix test
* Fix test
* Add test for v181 migration
* remove change user's email to lower
* Revert change on user's email column
* Fix lower email
* Fix test
* Fix test
* encrypt migration credentials in task persistence
Not sure this is the best approach, we could encrypt the entire
`PayloadContent` instead. Also instead of clearing individual fields in
payload content, we could just delete the task once it has
(successfully) finished..?
* remove credentials of past migrations
* only run DB migration for completed tasks
* fix binding
* add omitempty
* never serialize unencrypted credentials
* fix import order
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Double the avatar size factor
This results in finer avatar rendering on Hi-DPI displays.
* fix test
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Encrypt LDAP bind password in db with SECRET_KEY
The LDAP source bind passwords are currently stored in plaintext in the db.
This PR simply encrypts them with setting.SECRET_KEY.
Fix #15460
Signed-off-by: Andrew Thornton <art27@cantab.net>
* remove ui warning regarding unencrypted password
Co-authored-by: silverwind <me@silverwind.io>
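For illustration, encrypting a stored secret with a key derived from SECRET_KEY could look roughly like the sketch below (AES-GCM here; Gitea's own helper in `modules/secret` may use a different construction):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"io"
)

// encryptSecret encrypts plaintext with a key derived from the instance SECRET_KEY.
func encryptSecret(secretKey, plaintext string) (string, error) {
	key := sha256.Sum256([]byte(secretKey)) // derive a 32-byte AES key
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return "", err
	}
	sealed := gcm.Seal(nonce, nonce, []byte(plaintext), nil) // nonce || ciphertext
	return base64.StdEncoding.EncodeToString(sealed), nil
}

func main() {
	out, _ := encryptSecret("SECRET_KEY", "ldap-bind-password")
	fmt.Println(out)
}
```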
* Restore PAM user autocreation functionality
PAM autoregistration of users currently fails due to email invalidity.
This PR adds a new PAM setting to allow an email domain to be set,
or otherwise sets the email to the noreply address and, if that fails, falls
back to uuid@localhost.
Fix #15702
Signed-off-by: Andrew Thornton <art27@cantab.net>
* As per KN4CKER
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Unregister non-matching serviceworkers
With the addition of the /assets url, users who visited a previous
version of the site now may have two active service workers, one with
the old scope `/` and one with scope `/assets`. This checks for
serviceworkers that do not match the current script path and unregisters
them.
Also included is a small refactor to publicpath.js which was simplified
because AssetUrlPrefix is always present now. Also it makes use of the
new joinPaths helper too.
Fixes: https://github.com/go-gitea/gitea/pull/15823
Unfortunately some old repositories can have tags with empty Tagger, Commit
or Author. Go-Git variants will always have empty values for these whereas
the native git variant leaves them at nil. The simplest solution is just to
always have these set to empty Signatures.
v156 migration also makes the incorrect assumption that these cannot be empty.
Therefore add some handling to this and add logging and adjust broken
logging elsewhere in this migration.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Decouple TestAction_GetRepoLink and TestSizedAvatarLink.
* Load database for TestCheckGPGUserEmail.
* Load database for TestMakeIDsFromAPIAssigneesToAdd.
* Load database for TestGetUserIDsByNames and TestGetMaileableUsersByIDs.
* Load database for TestUser_ToUser.
* Load database for TestRepository_EditWikiPage.
* Include AppSubURL in test.
* Prevent panic with empty slice.
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Use single shared random string generation function
- Replace 3 functions that do the same with 1 shared one
- Use crypto/rand over math/rand for a stronger RNG
- Output only alphanumerical characters for URL compatibility
Fixes: #15536
* use const string method
* Update modules/avatar/avatar.go
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
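A sketch of a single shared helper along those lines, using crypto/rand and an alphanumeric alphabet (not necessarily the exact code that was merged):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const alphanum = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// randomString returns a URL-safe random string generated with crypto/rand.
func randomString(length int) (string, error) {
	buf := make([]byte, length)
	max := big.NewInt(int64(len(alphanum)))
	for i := range buf {
		n, err := rand.Int(rand.Reader, max)
		if err != nil {
			return "", err
		}
		buf[i] = alphanum[n.Int64()]
	}
	return string(buf), nil
}

func main() {
	s, err := randomString(16)
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```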
Repositories using an external issue tracker tend to use numeric issue references in
commits. To prevent conflicts during issue reference parsing or inside
commit hooks, this change respects that configuration and uses the !
character to refer to pull requests in merge commit messages.
For repositories using squash merges, this was already handled.
Signed-off-by: JustusBunsi <61625851+justusbunsi@users.noreply.github.com>
Co-authored-by: zeripath <art27@cantab.net>
* Fix setting version table in dump
As noted on Discord there is a problem with gitea dump where the version table
is not being dumped correctly.
This is due to a missing pointer in the TableInfo.
This PR fixes this.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Update models_test.go
* fix some ui bugs about draft releases
- should not show a draft release in the tag list because
it won't create a real tag
- still show a draft release without tag and commit message
instead of a 404 error
- remove tag load for attachment links because it's useless
Signed-off-by: a1012112796 <1012112796@qq.com>
* add test code
* fix test
That's because a new release has been added to the release test database.
* fix dropdown link for draft release
The DB session clean up needs to check expiry not created_unix.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
go panics otherwise with `panic: interface conversion: error is git.ErrNotExist, not *git.ErrNotExist`, thanks to Codeberg/Andi for reporting this.
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
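The panic comes from asserting an error to a pointer type while the value stored in the interface is a struct value. A self-contained illustration of the difference, using a hypothetical error type rather than Gitea's actual one:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotExist is a value-typed error, similar in spirit to git.ErrNotExist.
type ErrNotExist struct{ ID string }

func (e ErrNotExist) Error() string { return "object does not exist: " + e.ID }

func main() {
	var err error = ErrNotExist{ID: "abc123"}

	// Wrong: the interface holds ErrNotExist, not *ErrNotExist, so a bare
	// err.(*ErrNotExist) assertion (without the comma-ok form) would panic.
	if _, ok := err.(*ErrNotExist); !ok {
		fmt.Println("pointer assertion does not match")
	}

	// Right: match the value type actually stored, e.g. via errors.As.
	var target ErrNotExist
	if errors.As(err, &target) {
		fmt.Println("matched:", target.ID)
	}
}
```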
It is possible that tag commits could be deleted or missing from repos. This causes
migration 156 to fail and breaks upgrade.
This PR simply logs the failure.
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
If an avatar is requested in a particular size ensure that /avatars also gets the size request
Fix #15453
Signed-off-by: Andrew Thornton <art27@cantab.net>
Some postgres users have logging which logs even failed transactions. So
just query the db before trying to insert.
Fix #15451
Signed-off-by: Andrew Thornton art27@cantab.net
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Refactored handleOAuth2SignIn in routers/user/auth.go
The function handleOAuth2SignIn was called twice but some code path could only
be reached by one of the invocations. Moved the unnecessary code path out of
handleOAuth2SignIn.
* Refactored user creation
There was common code to create a user and display the correct error message.
And after the creation the only user should be an admin and if enabled a
confirmation email should be sent. This common code is now abstracted into
two functions and a helper function to call both.
* Added auto-register for OAuth2 users
If enabled new OAuth2 users will be registered with their OAuth2 details.
The UserID, Name and Email fields from the gothUser are used.
Therefore the OpenID Connect provider needs additional scopes to return
the corresponding claims.
* Added error for missing fields in OAuth2 response
* Linking and auto linking on oauth2 registration
* Set default username source to nickname
* Add automatic oauth2 scopes for github and google
* Add hint to change the openid connect scopes if fields are missing
* Extend info about auto linking security risk
Co-authored-by: Viktor Kuzmin <kvaster@gmail.com>
Signed-off-by: Martin Michaelis <code@mgjm.de>
The Session table must have an Expiry field not a created_unix field - somehow
this migration added the incorrectly named field, leading to the reports in #15445.
Fix #15445
Signed-off-by: Andrew Thornton <art27@cantab.net>
The issue is that the TestPatch will reset the PR MergeBase - and it is possible for TestPatch to update the MergeBase whilst a merge is ongoing. The ensuing merge will then complete but it doesn't re-set the MergeBase it used to merge the PR.
Fixes the intermittent error in git test.
Signed-off-by: Andrew Thornton art27@cantab.net
* _ to unused func options
* rm useless brackets
* rm trivial unused models functions
* rm dead code
* rm dead global vars
* fix routers/api/v1/repo/issue.go
* dont overload import module
* Implemented LFS client.
* Implemented scanning for pointer files.
* Implemented downloading of lfs files.
* Moved model-dependent code into services.
* Removed models dependency. Added TryReadPointerFromBuffer (see the pointer-parsing sketch after this list).
* Migrated code from service to module.
* Centralised storage creation.
* Removed dependency from models.
* Moved ContentStore into modules.
* Share structs between server and client.
* Moved method to services.
* Implemented lfs download on clone.
* Implemented LFS sync on clone and mirror update.
* Added form fields.
* Updated templates.
* Fixed condition.
* Use alternate endpoint.
* Added missing methods.
* Fixed typo and make linter happy.
* Detached pointer parser from gogit dependency.
* Fixed TestGetLFSRange test.
* Added context to support cancellation.
* Use ReadFull to properly read more data.
* Removed duplicated code from models.
* Moved scan implementation into pointer_scanner_nogogit.
* Changed method name.
* Added comments.
* Added more/specific log/error messages.
* Embedded lfs.Pointer into models.LFSMetaObject.
* Moved code from models to module.
* Moved code from models to module.
* Moved code from models to module.
* Reduced pointer usage.
* Embedded type.
* Use promoted fields.
* Fixed unexpected eof.
* Added unit tests.
* Implemented migration of local file paths.
* Show an error on invalid LFS endpoints.
* Hide settings if not used.
* Added LFS info to mirror struct.
* Fixed comment.
* Check LFS endpoint.
* Manage LFS settings from mirror page.
* Fixed selector.
* Adjusted selector.
* Added more tests.
* Added local filesystem migration test.
* Fixed typo.
* Reset settings.
* Added special windows path handling.
* Added unit test for HTTPClient.
* Added unit test for BasicTransferAdapter.
* Moved into util package.
* Test if LFS endpoint is allowed.
* Added support for git://
* Just use a static placeholder as the displayed url may be invalid.
* Reverted to original code.
* Added "Advanced Settings".
* Updated wording.
* Added discovery info link.
* Implemented suggestion.
* Fixed missing format parameter.
* Added Pointer.IsValid().
* Always remove model on error.
* Added suggestions.
* Use channel instead of array.
* Update routers/repo/migrate.go
* fmt
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
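For reference, an LFS pointer file is a small text blob following the git-lfs spec (`version`, `oid sha256:…`, `size` lines). A rough sketch of a buffer-based reader in the spirit of TryReadPointerFromBuffer (illustrative, not the merged code):

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// Pointer is a minimal LFS pointer: the sha256 oid and the object size.
type Pointer struct {
	Oid  string
	Size int64
}

var oidPattern = regexp.MustCompile(`^[a-f0-9]{64}$`)

// tryReadPointerFromBuffer parses an LFS pointer from the start of a buffer.
func tryReadPointerFromBuffer(buf []byte) (Pointer, error) {
	var p Pointer
	lines := strings.Split(string(buf), "\n")
	if len(lines) < 3 || !strings.HasPrefix(lines[0], "version https://git-lfs.github.com/spec/v1") {
		return p, errors.New("not an LFS pointer")
	}
	for _, line := range lines[1:] {
		switch {
		case strings.HasPrefix(line, "oid sha256:"):
			p.Oid = strings.TrimPrefix(line, "oid sha256:")
		case strings.HasPrefix(line, "size "):
			size, err := strconv.ParseInt(strings.TrimPrefix(line, "size "), 10, 64)
			if err != nil {
				return p, err
			}
			p.Size = size
		}
	}
	if !oidPattern.MatchString(p.Oid) || p.Size <= 0 {
		return p, errors.New("invalid LFS pointer")
	}
	return p, nil
}

func main() {
	ptr, err := tryReadPointerFromBuffer([]byte(
		"version https://git-lfs.github.com/spec/v1\noid sha256:" + strings.Repeat("a", 64) + "\nsize 12345\n"))
	fmt.Println(ptr, err)
}
```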
* Unexport SendUserMail
* Instead of "[]*models.User" or "[]string" lists infent "[]*MailRecipient" for mailer
* adopt
* code format
* TODOs for "i18n"
* clean
* no fallback for lang -> just use english
* lint
* exec testComposeIssueCommentMessage per lang and use only emails
* rm MailRecipient
* Don't reload users from the db if you already have them in RAM
* nits
* minimize diff
Signed-off-by: 6543 <6543@obermui.de>
* localize subjects
* linter ...
* Tr extend
* start tmpl edit ...
* Apply suggestions from code review
* use translation.Locale
* improve mailIssueCommentBatch
Signed-off-by: Andrew Thornton <art27@cantab.net>
* add i18n to datas
Signed-off-by: Andrew Thornton <art27@cantab.net>
* a comment
Co-authored-by: Andrew Thornton <art27@cantab.net>
/api/v1/repos/issues/search is a highly inefficient search which is unfortunately
the basis for our dependency searching algorithm. In particular, it currently loads
all of the repositories, their owners and their primary coding language, all of
which is immediately thrown away.
This PR makes one simple change - just get the IDs.
Related #14560
Related #12827
Signed-off-by: Andrew Thornton <art27@cantab.net>
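The "just get the IDs" change amounts to selecting a single column instead of whole beans. A sketch of the pattern with xorm (table and column names here are illustrative, not the exact query that was merged):

```go
package main

import (
	"fmt"

	"xorm.io/xorm"
)

// repoIDsForUser loads only repository IDs instead of full repository rows
// (with owners and language stats) that would be thrown away anyway.
func repoIDsForUser(e *xorm.Engine, ownerID int64) ([]int64, error) {
	ids := make([]int64, 0, 10)
	err := e.Table("repository").
		Cols("id").
		Where("owner_id = ?", ownerID).
		Find(&ids)
	return ids, err
}

func main() {
	fmt.Println("repoIDsForUser shows the column-only query pattern")
}
```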
The git gc cron task could change the size of the repository, therefore we should update the
size of the repo stored in our database.
Also significantly improve the efficiency of counting LFS objects associated with the
repository.
* Create Proper Migration tests
Unfortunately our testing regime has so far meant that migrations do not
get proper testing.
This PR begins the process of creating migration tests for this.
* Add test for v176
* fix mssql drop db
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Fix Migration 176 yet again
Whilst creating a test for v176 in the migrations_test PR
it has become clear that this was still wrong.
This is now fixed. Genuinely.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* and fix repo transfer
Signed-off-by: Andrew Thornton <art27@cantab.net>
There is a serious issue with the v176 migration where a label_id selection
is mistakenly missing.
*introduced by #14912*
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Never add labels not from this repository or organisation and remove org labels on transfer
Prevent the addition of labels from outside of the repository or
organisation and remove organisation labels on transfer.
Related #14908
Signed-off-by: Andrew Thornton <art27@cantab.net>
* switch to use sql
Signed-off-by: Andrew Thornton <art27@cantab.net>
* remove AS
Signed-off-by: Andrew Thornton <art27@cantab.net>
* subquery alias
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Give me some AS?
Signed-off-by: Andrew Thornton <art27@cantab.net>
* double AS
Signed-off-by: Andrew Thornton <art27@cantab.net>
* try try again
Signed-off-by: Andrew Thornton <art27@cantab.net>
* once more around the merry go round
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix api problem
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Add outside label consistency check into doctor
This PR adds another consistency check into doctor in order to detect
labels that have been added from outside of repositories and organisations
Fix #14908
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix migration
Signed-off-by: Andrew Thornton <art27@cantab.net>
* prep for merge
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Fix postgres ID sequences broken by recreate-table
Unfortunately there is a subtle problem with recreatetable on postgres which
leads to the sequences not being renamed and not being left at 0.
Fix #14725
Signed-off-by: Andrew Thornton <art27@cantab.net>
* let us try information_schema instead
Signed-off-by: Andrew Thornton <art27@cantab.net>
* try again
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: 6543 <6543@obermui.de>
* Ensure validation occurs on clone addresses too
Fix #14984
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix lint
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix test
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Fix api tests
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* chore: rewrite format.
* chore: update format
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
* chore: update format
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
* chore: Adjacent parameters with the same type should be grouped together
* chore: update format.
* API: fix set milestone on PR creation
PR creation via API failed with 404 because we searched
for milestoneID 0, due to uninitialized variable usage D:
* add tests
* fix expected status codes
* fix tests
Co-authored-by: 6543 <6543@obermui.de>
* Never add labels not from this repository or organisation and remove org labels on transfer
Prevent the addition of labels from outside of the repository or
organisation and remove organisation labels on transfer.
Related #14908
* switch to use sql
* subquery alias
* once more around the merry go round
* fix api problem
Closed milestones and issues should only be marked overdue if they were
closed after their deadline.
Fix: #14536
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Make automatic checking for manual merges a configurable option and add a manual merge option to the UI
As the title says: before this PR we used the same approach as GitHub to check for manual merges.
That works, but in some special cases misjudgments can occur, and such bugs are hard
to fix. So I added an option that allows repo managers to disable the "auto check manual merge"
function, giving the same behaviour as GitLab (allow empty PRs). To compensate for
not being able to detect merged PRs automatically, I added a manual approach.
Signed-off-by: a1012112796 <1012112796@qq.com>
* make swagger
* api support
* ping ci
* fix TestPullCreate_EmptyChangesWithCommits
* Apply suggestions from code review
Co-authored-by: zeripath <art27@cantab.net>
* Apply review suggestions and add test
* Apply suggestions from code review
Co-authored-by: zeripath <art27@cantab.net>
* fix build
* test error message
* make fmt
* Fix indentation issues identified by @silverwind
Co-authored-by: silverwind <me@silverwind.io>
* Fix tests and make manually merged disabled error on API the same
Signed-off-by: Andrew Thornton <art27@cantab.net>
* a small nit
* fix wrong commit id error
* fix bug
* simple test
* fix test
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Most DBs apart from SQLite will use a default Collation that is not case insensitive.
This means that SearchIssuesByKeyword becomes case sensitive for db indexing - in
contrast to the bleve and elastic indexers.
This PR simply uses UPPER(...) to do the LIKE - and although it may be more efficient
to change collations this would be a non-trivial task.
Fix #13663
Signed-off-by: Andrew Thornton <art27@cantab.net>
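In practice this kind of case-insensitive LIKE can be expressed directly in the query condition. A sketch with xorm (field names illustrative, not the exact merged query):

```go
package main

import (
	"fmt"
	"strings"

	"xorm.io/xorm"
)

// searchIssueIDsByKeyword performs a case-insensitive keyword match by
// upper-casing both sides of the LIKE, instead of relying on the collation.
func searchIssueIDsByKeyword(e *xorm.Engine, repoID int64, keyword string) ([]int64, error) {
	ids := make([]int64, 0, 10)
	pattern := "%" + strings.ToUpper(keyword) + "%"
	err := e.Table("issue").
		Cols("id").
		Where("repo_id = ?", repoID).
		And("UPPER(name) LIKE ? OR UPPER(content) LIKE ?", pattern, pattern).
		Find(&ids)
	return ids, err
}

func main() {
	fmt.Println("searchIssueIDsByKeyword shows the UPPER(...) LIKE pattern")
}
```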
* make repo as "pending transfer" if on transfer start doer has no right to create repo in new destination
* if new pending transfer ocured, create UI & Mail notifications
Instead of causing a log.Fatal, we should handle broken OAuth2
providers by disabling them.
Fix #8930
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Create Xorm session provider
This PR creates a Xorm session provider which creates
the appropriate Session table for macaron/session.
Fix #7137
Signed-off-by: Andrew Thornton <art27@cantab.net>
* extraneous l
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix lint
Signed-off-by: Andrew Thornton <art27@cantab.net>
* use key instead of ID to be compatible with go-macaron/session
Signed-off-by: Andrew Thornton <art27@cantab.net>
* And change the migration too.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Update spacing of imports
Co-authored-by: 6543 <6543@obermui.de>
* Update modules/session/xorm.go
Co-authored-by: techknowlogick <matti@mdranta.net>
* add xorm provider to the virtual provider
Signed-off-by: Andrew Thornton <art27@cantab.net>
* prep for master merge
* prep for merge master
* As per @lunny
* move migration out of the way
* Move to call this db session as per @lunny
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: techknowlogick <matti@mdranta.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Prevent adding nil label to .AddedLabels or .RemovedLabels
There are possibly a few old databases out there with malmigrated data that can
cause panics with empty labels being migrated.
This PR adds a few tests to prevent nil labels being added.
Fix #14466
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Add doctor command to remove the broken label comments
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
* Fix GPG key deletion when user is deleted
Per #14531, deleting a user account will delete the user's GPG keys
from the `gpg_key` table but not from `gpg_key_import`, which causes
an error when creating an account with the same email and attempting
to re-add the same key. This commit deletes all entries from
`gpg_key_import` that match any GPG key IDs belonging to the user.
* Format added code in models/user.go
* Create a new function for listing GPG keys and apply it
Create a new function `listGPGKeys` and replace a previous use
of `ListGPGKeys`. Thanks to @6543 for the patch.
Co-authored-by: Anton Khimich <anton.khimicha@mail.utoronto.ca>
Co-authored-by: 6543 <6543@obermui.de>
Migrations currently uses the default Xorm mapper which is
not the same as the mapper Gitea actually uses.
This means that there is a difference between the struct
parsing and mapping to database tables in migrations as
compared to normal Sync2.
This was the cause for the catastrophic problem in v168 -
untagged fields are not mapped in the same way in migrations
as compared to outside of migrations.
This is also likely the cause of some weird subtle failures
in other migrations as any untagged field may not be being
mapped exactly the same way.
This PR suggests that we ensure that the mapper is set at
the start of the migrations code - but also enforces a strict
clean mapper between each migration.
Signed-off-by: Andrew Thornton <art27@cantab.net>
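The fix boils down to explicitly setting the same names mapper Gitea uses elsewhere before (and between) migrations, rather than relying on xorm's default. A sketch of the idea; the exact mapper configuration and hook points in the real migrations code may differ:

```go
package main

import (
	"fmt"

	"xorm.io/xorm"
	"xorm.io/xorm/names"
)

// migration is a simplified stand-in for Gitea's migration entries.
type migration func(*xorm.Engine) error

// runMigrations enforces a known, clean mapper before every migration so that
// untagged struct fields map to columns the same way as in normal Sync2 calls.
func runMigrations(x *xorm.Engine, migrations []migration) error {
	for i, m := range migrations {
		x.SetMapper(names.GonicMapper{}) // reset to the mapper used outside migrations
		if err := m(x); err != nil {
			return fmt.Errorf("migration %d failed: %w", i, err)
		}
	}
	return nil
}

func main() {
	fmt.Println("runMigrations resets the mapper before each step")
}
```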
* Fix mig 141
* Add Migration to fix it
* update null values to false first
* Alter Table if possible
* use dropTableColumns instead of recreateTable
* MySQL use Alter
* Postgres use Alter
* Update models/migrations/v167.go
* Apply suggestions from code review
* use 2x add col & 2x update & 2x drop col
* let sqlite be the only issue
* use recreate since it just WORKS
Close **Prune hook_task Table (#10741)**
Added a cron job to delete webhook deliveries in the hook_task table. It can be turned on/off and the schedule controlled globally via app.ini. The data can be deleted either by the age of the delivery, which is the default, or by deleting all but the most recent deliveries _per webhook_.
Note: I had previously submitted pr #11416 but I closed it when I realized that I had deleted per repository instead of per webhook. Also, I decided allowing the settings to be overridden via the ui was overkill. Also this version allows the deletion by age which is probably what most people would want.
* Add redirect for user
* Add redirect for orgs
* Add user redirect test
* Appease linter
* Add comment to DeleteUserRedirect function
* Fix locale changes
* Fix GetUserByParams
* Fix orgAssignment
* Remove debug logging
* Add redirect prompt
* Dont Export DeleteUserRedirect & only use it within a session
* Unexport newUserRedirect
* cleanup
* Fix & Dedub API code
* Format Template
* Add Migration & rm dublicat
* Refactor: unexport newRepoRedirect() & rm dedub del exec
* if this fails we'll need to re-rename the user directory
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* refactor models.DeleteComment and delete related reactions too
* use deleteComment for UserDeleteWithCommentsMaxDays in DeleteUser
* nits
* Use time.Duration as other time settings have
* docs
* Resolve Fixme & fix potential deadlock
* Disabled by Default
* Update Config Value Description
* switch args
* Update models/issue_comment.go
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
* add notification about running stopwatch to header
* serialize seconds, duration in stopwatches api
* ajax update stopwatch
i should get my testenv working locally...
* new variant: hover dialog
* noscript compatibility
* js: live-update stopwatch time
* js live update robustness
* Add support for ed25519_sk and ecdsa_sk SSH keys
These start with sk-ssh-ed25519@openssh.com and sk-ecdsa-sha2-nistp256@openssh.com.
They are supported in recent versions of go x/crypto/ssh and OpenSSH 8.2
or higher.
* skip ssh-keygen
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Andrew Thornton <art27@cantab.net>
* Implement ghost comment mitigation
Adds a config option USER_DELETE_WITH_COMMENTS_MAX_DAYS to the [service] section. See https://codeberg.org/Codeberg/Discussion/issues/24 for the underlying issue.
* cleanup
* use setting module correctly
* add to docs
Co-authored-by: Moritz Marquardt <git@momar.de>
* Add review requested filter on pull request overview #13682
fix formatting
* add review_requested filter to /repos/issues/search API endpoint
* only Approve and Reject status should supersede Request status
* add support for team reviews
* refactor: remove duplication of issue filtering conditions
* move SaltGeneration into HashPassword and rename it to what it does
* Migration: where the stored password hash is valid for the empty string, delete it
* prohibit empty password hash
* let SetPassword("") unset pwd stuff
Fixed #8861
* use ajax on PR review page
* handle review comments
* extract duplicate code
FetchCodeCommentsByLine was initially more or less copied from fetchCodeCommentsByReview. Now they both use a common findCodeComments function instead
* use the Engine that was passed into the method
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Fix wrong type on hooktask to convert typ from char(16) to varchar(16)
* Fix bugs
* Improve code
* Use different trim function for MSSQL
* Fix bug
* Removed wrong changed line
* Removed wrong changed line
* Fix nullable
* Fix lint
* Ignore sqlite on migration
* Fix mssql modify column failure
* Move modifyColumn to migrations.go so that other migrate function could use it
* Added MirrorInterval to the API
* Remove MirrorInterval from CreateRepository
* Removed Duplicate UpdateMirror Function
* Updated Error Logging
* Update Log Message for is not Mirror
Co-authored-by: 6543 <6543@obermui.de>
* Delete Debug Statement that snuck in
Co-authored-by: zeripath <art27@cantab.net>
* Add Check for If Interval is too small
* Output to API Call
* Add Error Object when time is Less than Min Interval
* Frequency Error Message
Co-authored-by: zeripath <art27@cantab.net>
* Allow Zero Mirror Interval
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
Fixes #14187: mention handling extracted from email notification code
Fixes #14013: add notification for mentions in pull request code comments
Fixes #13450: Not receiving any emails with setting "Only Email on Mention"
* Ensure that schema search path is set with every connection on postgres
Unfortunately every connection to postgres requires that the search path is
set appropriately.
This PR shadows the postgres driver to ensure that as soon as a connection
is open, the search_path is set appropriately.
Fix #14088
Signed-off-by: Andrew Thornton <art27@cantab.net>
* no golangci-lint that is not a helpful suggestion
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Use Execer if available
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
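One way to "shadow" the driver so that every pooled connection gets the right search_path is to wrap the database/sql connector and issue a SET as soon as a connection opens. A rough sketch under that assumption (not the exact code merged into Gitea):

```go
package main

import (
	"context"
	"database/sql/driver"
)

// searchPathConnector wraps another connector and sets the PostgreSQL
// search_path on every new connection before it is handed to the pool.
type searchPathConnector struct {
	base       driver.Connector
	searchPath string
}

func (c *searchPathConnector) Connect(ctx context.Context) (driver.Conn, error) {
	conn, err := c.base.Connect(ctx)
	if err != nil {
		return nil, err
	}
	if execer, ok := conn.(driver.ExecerContext); ok {
		// The schema name is assumed to be validated/quoted by the caller.
		if _, err := execer.ExecContext(ctx, `SET search_path = `+c.searchPath, nil); err != nil {
			_ = conn.Close()
			return nil, err
		}
	}
	return conn, nil
}

func (c *searchPathConnector) Driver() driver.Driver { return c.base.Driver() }

func main() {
	// Usage (conceptual): wrap the connector returned by the postgres driver, then
	// sql.OpenDB(&searchPathConnector{base: pgConnector, searchPath: `"gitea",public`}).
}
```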
This is "minimal" in the sense that only the Authorization Code Flow
from OpenID Connect Core is implemented. No discovery, no configuration
endpoint, and no user scope management.
OpenID Connect is an extension to the (already implemented) OAuth 2.0
protocol, and essentially an `id_token` JWT is added to the access token
endpoint response when using the Authorization Code Flow. I also added
support for the "nonce" field since it is required to be used in the
id_token if the client decides to include it in its initial request.
In order to enable this extension an OAuth 2.0 scope containing
"openid" is needed. Other OAuth 2.0 requests should not be impacted by
this change.
This minimal implementation is enough to enable single sign-on (SSO)
for other sites, e.g. by using something like `mod_auth_openidc` to
only allow access to a CI server if a user has logged into Gitea.
Fixes: #1310
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
* Change topic name size from 25 to 50
* recreateTable requires full bean definition
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
* Disable SSH key addition and deletion when externally managed
When a user has a login source which has SSH key management
key addition and deletion using the UI should be disabled.
Fix #13983
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Make only externally managed keys disabled
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* remove github.com/unknwon/com from models
* dont use "com.ToStr()"
* replace "com.ToStr" with "fmt.Sprint" where its easy to do
* more refactor
* fix test
* just "proxy" Copy func for now
* as per @lunny
* now uses the same permission model as for the activity feed:
only include activities in repos that the doer has access to.
this might be somewhat slower.
* also improves handling of user.KeepActivityPrivate (still shows
the heatmap to self & admins)
* extend tests
* adjust integration test to new behaviour
* add access to actions for admins
* extend heatmap unit tests
* Show dropdown with all statuses for commit
* Use popups
* Remove unnecessary change
* Style popup
* Use divided list
* As per @silverwind
* Refactor GetLastCommitStatus
* Missing dropdown on repo home and commit page
* Fix tests
* Make status icon be a part of a link on PR list
* Fix missing translation call
* Indent fix
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
* Trim the branch prefix from action.GetBranch
#13882 has revealed that the refname of an action is actually only a
refname pattern and not necessarily a branch. For example, pushing to
refs/heads/master will result in an action with refname refs/heads/master,
but pushing to master will result in a refname of master.
The simplest solution to providing a fix here is to trim the prefix
therefore this PR proposes this.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Update models/action.go
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
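The trim itself is a one-liner, roughly like the following (illustrative, not the exact diff):

```go
package main

import (
	"fmt"
	"strings"
)

// getBranch normalises an action refname so that both "refs/heads/master"
// and a bare "master" come out as "master".
func getBranch(refName string) string {
	return strings.TrimPrefix(refName, "refs/heads/")
}

func main() {
	fmt.Println(getBranch("refs/heads/master")) // master
	fmt.Println(getBranch("master"))            // master
}
```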
The frontpage uses a rather strange method to obtain the commit's avatar
which I've overlooked earlier. I don't exactly understand how it works
but this change fixes the wrong default avatars by using the function
that was in previous use.
Also introduced a few constants for the size and the size increase factor.
Fixes: https://github.com/go-gitea/gitea/issues/13844
* Direct avatar rendering
This adds new template helpers for avatar rendering which output image
elements with direct links to avatars which makes them cacheable by the
browsers.
This should be a major performance improvement for pages with many avatars.
* fix avatars of other user's profile pages
* fix top border on user avatar name
* uncircle avatars
* remove old incomplete avatar selector
* use title attribute for name and add it back on blame
* minor refactor
* tweak comments
* fix url path join and adjust test to new result
* dedupe functions
* add black list and white list support for migrating repositories
* fix fmt
* fix lint
* fix vendor
* fix modules.txt
* clean diff
* specify log message
* use blocklist/allowlist
* always use lowercase to match url
* Apply allow/block
* Settings: use existing "migrations" section
* convert domains lower case
* dont store unused value
* Block private addresses for migration by default (see the sketch after this list)
* fix lint
* use proposed-upstream func to detect private IP addr
* a nit
* add own error for blocked migration, add tests, improve api
* fix test
* fix-if-localhost-is-ipv4
* rename error & error message
* rename setting options
* Apply suggestions from code review
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
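A sketch of the "Block private addresses for migration by default" behaviour: resolve the migration host and refuse private, loopback, or link-local addresses. This uses the standard library (Go 1.17+ for `IsPrivate`); the "proposed-upstream func" mentioned above may differ:

```go
package main

import (
	"fmt"
	"net"
)

// isBlockedHost reports whether a migration source host resolves to an
// address that should be blocked by default (private, loopback, link-local).
func isBlockedHost(host string) (bool, error) {
	ips, err := net.LookupIP(host)
	if err != nil {
		return false, err
	}
	for _, ip := range ips {
		if ip.IsPrivate() || ip.IsLoopback() || ip.IsLinkLocalUnicast() || ip.IsUnspecified() {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	blocked, err := isBlockedHost("localhost")
	fmt.Println(blocked, err) // true <nil> on most systems
}
```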
* ui: show 'owner' tag for real owner
Signed-off-by: a1012112796 <1012112796@qq.com>
* Update custom/conf/app.example.ini
* simplify logic
fix logic
fix a small bug about original author
* remove system manager tag
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lauris BH <lauris@nix.lv>
* Add time filter for issue search
* Add limit option for pagination
* Add Filter for: Created by User, Assigned to User, Mentioning User
* update swagger
* Add Tests for limit, before & since
* Improve error feedback for duplicate deploy keys
Instead of a generic HTTP 500 error page, a flash message is rendered
with the deploy key page template to inform the user that a key with the
intended title already exists.
* API returns 422 error when key with name exists
* Add email validity checking
Add email validity checking for the following routes (a minimal validation sketch follows this entry):
[Web interface]
1. User registration
2. User creation by admin
3. Adding an email through user settings
[API]
1. POST /admin/users
2. PATCH /admin/users/:username
3. POST /user/emails
* Add further tests
* Add signup email tests
* Add email validity check for linking existing account
* Address PR comments
* Remove unneeded DB session
* Move email check to updateUser
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
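For the email validity checks above, a minimal validation helper could look like the sketch below; Gitea's real check also applies its own character rules, so treat this as illustrative:

```go
package main

import (
	"fmt"
	"net/mail"
	"strings"
)

// isEmailValid is a minimal validity check in the spirit of the routes above:
// the input must parse as a bare RFC 5322 address and contain a domain part.
func isEmailValid(email string) bool {
	addr, err := mail.ParseAddress(email)
	if err != nil || addr.Address != email {
		return false
	}
	at := strings.LastIndex(email, "@")
	return at > 0 && at < len(email)-1
}

func main() {
	fmt.Println(isEmailValid("user@example.com")) // true
	fmt.Println(isEmailValid("not-an-email"))     // false
}
```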
When migrating repositories with reactions with deleted users, the original
author id may be -1. This means that it is possible to end up attempting
to create multiple reactions with the same [ Type, IssueID, CommentID, UserID,
OriginalAuthorID ] thus breaking the constraints.
On SQLite this appears to cause a deadlock but on other dbs this will
cause the migration to fail.
This PR extends the constraint to include the original author username
in the constraint.
Fix #13271
Signed-off-by: Andrew Thornton <art27@cantab.net>
* When replying to an outdated comment it should not appear on the files page
This happened because the comment took the latest commitID as its base instead of the
reviewID that it was replying to.
There was also no way of creating an already outdated comment - and a
reply to a review on an outdated line should be outdated.
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix test
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Fix broken migration
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix mssql
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Create temporary table because ... well MSSQL ...
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Create temporary table because ... well MSSQL ...
Signed-off-by: Andrew Thornton <art27@cantab.net>
* Create temporary table because ... well MSSQL ...
Signed-off-by: Andrew Thornton <art27@cantab.net>
* fix mssql
Signed-off-by: Andrew Thornton <art27@cantab.net>
* move session within the batch
Signed-off-by: Andrew Thornton <art27@cantab.net>
* regen the sqlcmd each time round the loop
Signed-off-by: Andrew Thornton <art27@cantab.net>
* as per @lunny
Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
* Make archival asynchronous
The prime benefit being sought here is for large archives to not
clog up the rendering process and cause unsightly proxy timeouts.
As a secondary benefit, archive-in-progress is moved out of the
way into a /tmp file so that new archival requests for the same
commit will not get fulfilled based on an archive that isn't yet
finished.
This asynchronous system is fairly primitive; request comes in, we'll
spawn off a new goroutine to handle it, then we'll mark it as done.
Status requests will see if the file exists in the final location,
and report the archival as done when it exists.
Fixes #11265
* Archive links: drop initial delay to three-quarters of a second
Some, or perhaps even most, archives will not take all that long to archive.
The archive process starts as soon as the download button is initially
clicked, so in theory they could be done quite quickly. Drop the initial
delay down to three-quarters of a second to make it more responsive in the
common case of the archive being quickly created.
* archiver: restructure a little bit to facilitate testing
This introduces two sync.Cond pointers to the archiver package. If they're
non-nil when we go to process a request, we'll wait until signalled (at all)
to proceed. The tests will then create the sync.Cond so that it can signal
at-will and sanity-check the state of the queue at different phases.
The author believes that nil-checking these two sync.Cond pointers on every
archive processing will introduce minimal overhead with no impact on
maintainability.
* gofmt nit: no space around binary + operator
* services: archiver: appease golangci-lint, lock queueMutex
Locking/unlocking the queueMutex is allowed, but not required, for
Cond.Signal() and Cond.Broadcast(). The magic at play here is just a little
too much for golangci-lint, as we take the address of queueMutex and this is
mostly used in archiver.go; the variable still gets flagged as unused.
* archiver: tests: fix several timing nits
Once we've signaled a cond var, it may take some small amount of time for
the goroutines released to hit the spot we're wanting them to be at. Give
them an appropriate amount of time.
* archiver: tests: no underscore in var name, ungh
* archiver: tests: Test* is run in a separate context than TestMain
We must setup the mutex/cond variables at the beginning of any test that's
going to use it, or else these will be nil when the test is actually ran.
* archiver: tests: hopefully final tweak
Things got shuffled around such that we carefully build up and release
requests from the queue, so we can validate the state of the queue at each
step. Fix some assertions that no longer hold true as fallout.
* repo: Download: restore some semblance of previous behavior
When archival was made async, the GET endpoint was only useful if a previous
POST had initiated the download. This commit restores the previous behavior,
to an extent; we'll now submit the archive request there and return a
"202 Accepted" to indicate that it's processing if we didn't manage to
complete the request within ~2 seconds of submission.
This lets a client directly GET the archive, and gives them some indication
that they may attempt to GET it again at a later time.
* archiver: tests: simplify a bit further
We don't need to risk failure and use time.ParseDuration to get 2 *
time.Second.
else if isn't really necessary if the conditions are simple enough and lead
to the same result.
* archiver: tests: resolve potential source of flakiness
Increase all timeouts to 10 seconds; these aren't hard-coded sleeps, so
there's no guarantee we'll actually take that long. If we need longer to
not have a false-positive, then so be it.
While here, various assert.{Not,}Equal arguments are flipped around so that
the wording in error output reflects reality, where the expected argument is
second and actual third.
* archiver: setup infrastructure for notifying consumers of completion
This API will *not* allow consumers to subscribe to specific requests being
completed, just *any* request being completed. The caller is responsible for
determining if their request is satisfied and waiting again if needed.
* repo: archive: make GET endpoint synchronous again
If the request isn't complete, this endpoint will now submit the request and
wait for completion using the new API. This may still be susceptible to
timeouts for larger repos, but other endpoints now exist that the web
interface will use to negotiate its way through larger archive processes.
* archiver: tests: amend test to include WaitForCompletion()
This is a trivial one, so go ahead and include it.
* archiver: tests: fix test by calling NewContext()
The mutex is otherwise uninitialized, so we need to ensure that we're
actually initializing it if we plan to test it.
* archiver: tests: integrate new WaitForCompletion a little better
We can use this to wait for archives to come in, rather than spinning and
hoping with a timeout.
* archiver: tests: combine numQueued declaration with next-instruction assignment
* routers: repo: reap unused archiving flag from DownloadStatus()
This had some planned usage before, indicating whether this request
initiated the archival process or not. After several rounds of refactoring,
this use was deemed not necessary for much of anything and got boiled down
to !complete in all cases.
* services: archiver: restructure to use a channel
We now offer two forms of waiting for a request:
- WaitForCompletion: wait for completion with no timeout
- TimedWaitForCompletion: wait for completion with timeout (see the sketch after this entry)
In both cases, we wait for the given request's cchan to close; in the latter
case, we do so with the caller-provided timeout. This completely removes the
need for busy-wait loops in Download/InitiateDownload, as it's fairly clean
to wait on a channel with timeout.
* services: archiver: use defer to unlock now that we can
This previously carried the lock into the goroutine, but an intermediate
step just added the request to archiveInProgress outside of the new
goroutine and removed the need for the goroutine to start out with it.
* Revert "archiver: tests: combine numQueued declaration with next-instruction assignment"
This reverts commit bcc5214023.
Revert "archiver: tests: integrate new WaitForCompletion a little better"
This reverts commit 9fc8bedb56.
Revert "archiver: tests: fix test by calling NewContext()"
This reverts commit 709c35685e.
Revert "archiver: tests: amend test to include WaitForCompletion()"
This reverts commit 75261f56bc.
* archiver: tests: first attempt at WaitForCompletion() tests
* archiver: tests: slight improvement, less busy-loop
Just wait for the requests to complete in order, instead of busy-waiting
with a timeout. This is slightly less fragile.
While here, reverse the arguments of a nearby assert.Equal() so that
expected/actual are correct in any test output.
* archiver: address lint nits
* services: archiver: only close the channel once
* services: archiver: use a struct{} for the wait channel
This makes it obvious that the channel is only being used as a signal,
rather than anything useful being piped through it.
* archiver: tests: fix expectations
Move the close of the channel into doArchive() itself; notably, before these
goroutines move on to waiting on the Release cond.
The tests are adjusted to reflect that we can't WaitForCompletion() after
they've already completed, as WaitForCompletion() doesn't indicate that
they've been released from the queue yet.
* archiver: tests: set cchan to nil for comparison
* archiver: move ctx.Error's back into the route handlers
We shouldn't be setting this in a service, we should just be validating the
request that we were handed.
* services: archiver: use regex to match a hash
This makes sure we don't try and use refName as a hash when it's clearly not
one, e.g. heads/pull/foo.
* routers: repo: remove the weird /archive/status endpoint
We don't need to do this anymore, we can just continue POSTing to the
archive/* endpoint until we're told the download's complete. This avoids a
potential naming conflict, where a ref could start with "status/"
* archiver: tests: bump reasonable timeout to 15s
* archiver: tests: actually release timedReq
* archiver: tests: run through inFlight instead of manually checking
While we're here, add a test for manually re-processing an archive that's
already been complete. Re-open the channel and mark it incomplete, so that
doArchive can just mark it complete again.
* initArchiveLinks: prevent default behavior from clicking
* archiver: alias gitea's context, golang context import pending
* archiver: simplify logic, just reconstruct slices
While the previous logic was perhaps slightly more efficient, the
new variant's readability is much improved.
* archiver: don't block shutdown on waiting for archive
The technique established launches a goroutine to do the wait,
which will close a wait channel upon termination. For the timeout
case, we also send back a value indicating whether the timeout was
hit or not.
The timeouts are expected to be relatively small, but still a multi-
second delay to shutdown due to this could be unfortunate.
* archiver: simplify shutdown logic
We can just grab the shutdown channel from the graceful manager instead of
constructing a channel to halt the caller and/or pass a result back.
* Style issues
* Fix mis-merge
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
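The two waiting modes described above reduce to a select on the request's channel, optionally racing a timer. A sketch of what the pair of helpers could look like (names follow the description above; the details are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// ArchiveRequest is a stand-in for the archiver's request type; cchan is
// closed by the worker once the archive is complete.
type ArchiveRequest struct {
	cchan chan struct{}
}

// WaitForCompletion blocks until the request's channel is closed.
func (r *ArchiveRequest) WaitForCompletion() {
	<-r.cchan
}

// TimedWaitForCompletion waits for completion or the given timeout,
// returning true if the archive completed in time.
func (r *ArchiveRequest) TimedWaitForCompletion(timeout time.Duration) bool {
	select {
	case <-r.cchan:
		return true
	case <-time.After(timeout):
		return false
	}
}

func main() {
	req := &ArchiveRequest{cchan: make(chan struct{})}
	go func() {
		time.Sleep(50 * time.Millisecond)
		close(req.cchan) // the worker signals completion by closing the channel
	}()
	fmt.Println(req.TimedWaitForCompletion(2 * time.Second)) // true
}
```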