close #27031
If the RPM package does not contain a matching GPG signature, the
installation will fail (see #27031). RPM uploads are now auto-signed.
This option is turned off by default for compatibility.
If a pull request review was assigned to a team, the team's members were
not shown in the `requested_reviewers` field, so the field was null. As a
solution, I added the team members to the array.
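For illustration, a minimal Go sketch of the idea (the `PullRequest`, `Team`, and `collectRequestedReviewers` names are hypothetical, not the actual Gitea code):

```go
package main

import "fmt"

type User struct {
	ID   int64
	Name string
}

type Team struct {
	Members []*User
}

// PullRequest is a simplified, hypothetical stand-in for the real model.
type PullRequest struct {
	RequestedReviewers     []*User
	RequestedReviewerTeams []*Team
}

// collectRequestedReviewers expands requested teams into their members so the
// API's "requested_reviewers" array is populated instead of null when a team
// was assigned as reviewer.
func collectRequestedReviewers(pr *PullRequest) []*User {
	seen := map[int64]bool{}
	out := []*User{}
	for _, u := range pr.RequestedReviewers {
		if !seen[u.ID] {
			seen[u.ID] = true
			out = append(out, u)
		}
	}
	for _, t := range pr.RequestedReviewerTeams {
		for _, u := range t.Members {
			if !seen[u.ID] {
				seen[u.ID] = true
				out = append(out, u)
			}
		}
	}
	return out
}

func main() {
	alice, bob := &User{ID: 1, Name: "alice"}, &User{ID: 2, Name: "bob"}
	pr := &PullRequest{RequestedReviewerTeams: []*Team{{Members: []*User{alice, bob}}}}
	fmt.Println(len(collectRequestedReviewers(pr))) // 2 instead of an empty/null list
}
```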
Fix #31764
Fix #31657.
According to the
[doc](https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#onschedule)
of GitHub Actions, the timezone for cron should be UTC, not the local
timezone. Gitea Actions has no reason to change this, so I think it's a
bug.
However, Gitea Actions has extended the syntax: it supports descriptors
like `@weekly` and `@every 5m`, and supports specifying the timezone like
`TZ=UTC 0 10 * * *`. So we can make it use UTC only when the timezone is
not specified, to be compatible with GitHub Actions, and also respect the
user's specified timezone.
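A minimal sketch of that rule using `robfig/cron/v3` (the `parseSchedule` helper, and the assumption that this parser is the one in use, are illustrative rather than the exact code in this PR):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/robfig/cron/v3"
)

// parseSchedule pins a workflow cron spec to UTC unless the user specified a
// timezone explicitly via the "TZ=" / "CRON_TZ=" prefix.
func parseSchedule(spec string) (cron.Schedule, error) {
	if !strings.HasPrefix(spec, "TZ=") && !strings.HasPrefix(spec, "CRON_TZ=") {
		spec = "TZ=UTC " + spec
	}
	parser := cron.NewParser(cron.Minute | cron.Hour | cron.Dom | cron.Month | cron.Dow | cron.Descriptor)
	return parser.Parse(spec)
}

func main() {
	s, err := parseSchedule("0 10 * * *") // evaluated at 10:00 UTC, not server local time
	fmt.Println(s, err)

	s, err = parseSchedule("TZ=Asia/Shanghai 0 10 * * *") // explicit timezone is respected
	fmt.Println(s, err)
}
```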
This does break existing behavior, because the times at which scheduled
tasks run will change, and it may confuse users. So I don't think we
should backport this.
## ⚠️ BREAKING ⚠️
If the server's local time zone is not UTC, a scheduled task would run
at a different time after upgrading Gitea to this version.
Fix #31707.
Also related to #31715.
Some Actions resources can have different types of ownership:
- global: all repos and orgs/users can use it.
- org/user level: only the org/user can use it.
- repo level: only the repo can use it.
There are two ways to distinguish org/user level from repo level:
1. `{owner_id: 1, repo_id: 2}` for repo level, and `{owner_id: 1,
repo_id: 0}` for org level.
2. `{owner_id: 0, repo_id: 2}` for repo level, and `{owner_id: 1,
repo_id: 0}` for org level.
The first way seems more reasonable, but it may not be true. The point
is that although a resource, like a runner, belongs to a repo (it can be
used by the repo), the runner doesn't belong to the repo's org (other
repos in the same org cannot use the runner). So, the second method
makes more sense.
Also, the first way is not friendly to query: we must set the repo id to
zero to avoid wrong results.
So #31715 should be right. And the simplest way to fix #31707 is just:
```diff
- shared.GetRegistrationToken(ctx, ctx.Repo.Repository.OwnerID, ctx.Repo.Repository.ID)
+ shared.GetRegistrationToken(ctx, 0, ctx.Repo.Repository.ID)
```
However, it is quite intuitive to set both the owner id and the repo id,
since the repo belongs to the owner. So I prefer to stay compatible with
that. If both the owner id and the repo id are non-zero when creating or
finding, it's very clear that the caller wants a repo-level resource but
set the owner id accidentally. So it's OK to accept it but reset the
owner id to zero.
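A minimal sketch of that normalization (the `normalizeOwnership` function is illustrative, not an actual Gitea helper):

```go
package main

import "fmt"

// normalizeOwnership applies the rule above: a repo-level resource must not
// carry an owner id, so when both ids are non-zero the caller clearly meant
// repo level and the owner id is reset to zero.
func normalizeOwnership(ownerID, repoID int64) (int64, int64) {
	if ownerID != 0 && repoID != 0 {
		ownerID = 0
	}
	return ownerID, repoID
}

func main() {
	fmt.Println(normalizeOwnership(1, 2)) // 0 2 -> repo level
	fmt.Println(normalizeOwnership(1, 0)) // 1 0 -> org/user level
	fmt.Println(normalizeOwnership(0, 0)) // 0 0 -> global
}
```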
Fix #31137.
Replace #31623 and #31697.
When migrating LFS objects, if any object fails (for example, some
objects are lost, which is not really critical), Gitea will stop
migrating LFS immediately but still treat the migration as successful.
This PR checks the error according to the [LFS api
doc](https://github.com/git-lfs/git-lfs/blob/main/docs/api/batch.md#successful-responses).
> LFS object error codes should match HTTP status codes where possible:
>
> - 404 - The object does not exist on the server.
> - 409 - The specified hash algorithm disagrees with the server's acceptable options.
> - 410 - The object was removed by the owner.
> - 422 - Validation error.
If the error is `404`, it's safe to ignore it and continue migration.
Otherwise, stop the migration and mark it as failed to ensure data
integrity of LFS objects.
And maybe we should also ignore other errors (maybe `410`? I'm not sure
what the difference is between "does not exist" and "removed by the
owner"). We can add that later when users report that their LFS
migration failed because of an error which should have been ignored.
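For illustration, a hedged sketch of such a check against a batch response object (the `ObjectError` shape follows the LFS batch API; `checkObjectError` is hypothetical, not the code in this PR):

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// ObjectError mirrors the per-object error in an LFS batch response.
type ObjectError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

// checkObjectError decides whether a failed object should abort the migration.
// A 404 ("object does not exist on the server") is skipped; anything else
// fails the migration so LFS data integrity is preserved.
func checkObjectError(oid string, objErr *ObjectError) error {
	if objErr == nil {
		return nil
	}
	if objErr.Code == http.StatusNotFound {
		fmt.Printf("skipping missing LFS object %s: %s\n", oid, objErr.Message)
		return nil
	}
	return errors.New("LFS migration failed for " + oid + ": " + objErr.Message)
}

func main() {
	fmt.Println(checkObjectError("abc123", &ObjectError{Code: 404, Message: "Object does not exist"}))
	fmt.Println(checkObjectError("def456", &ObjectError{Code: 422, Message: "Validation error"}))
}
```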
There's already `initActionsTasks`; moving `registerActionsCleanup` into
it avoids an additional check for whether Actions is enabled.
And we don't really need `OlderThanConfig`.
- Change condition to include `RepoID` equal to 0 for organization
secrets
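Roughly, in terms of an `xorm.io/builder` condition (an illustrative sketch, not the exact query changed here):

```go
package main

import (
	"fmt"

	"xorm.io/builder"
)

func main() {
	// Organization-level secrets are owned by the org and bound to no repository,
	// so the lookup must include repo_id = 0 alongside the owner id.
	orgID := int64(1)
	cond := builder.Eq{"owner_id": orgID, "repo_id": 0}

	sql, args, err := builder.ToSQL(cond)
	fmt.Println(sql, args, err) // e.g. "owner_id=? AND repo_id=?" with args [1 0]
}
```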
---------
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
Fix #26685
If a commit status comes from Gitea Actions and the user cannot access
the repo's Actions unit (the user does not have permission or the
Actions unit is disabled), clicking the "Details" link leads to a 404
page. We should hide the "Details" link in this case.
<img
src="https://github.com/go-gitea/gitea/assets/15528715/68361714-b784-4bb5-baab-efde4221f466"
width="400px" />
Document the return type for the endpoints that fetch specific files
from a repository. This allows the Swagger-generated code to read the
returned data.
Co-authored-by: Giteabot <teabot@gitea.io>
We don't need polyfills down to Node v4. Some of our dependencies ship
polyfills and don't use the built-in implementation even when it's
available. While this does shrink our package graph, I haven't noticed
any change in page load times, although that's likely just because it's
already pretty fast.
Nolyfill is https://github.com/SukkaW/nolyfill
Updates to files were generated with:
```shell
npx nolyfill install
npm update
```
Before this is or isn't merged, I'd appreciate others' insights.
Edit: This isn't due to a specific individual. I am generally supportive
of them and their dedication to backward compatibility. This PR is due
to not needing those imports for our minimum requirements. Please don't
take this PR as commentary on anyone's character.
---------
Co-authored-by: silverwind <me@silverwind.io>
See discussion on #31561 for some background.
The introspect endpoint was using the OIDC token itself for
authentication. This fixes it to use basic authentication with the
client ID and secret instead:
* Applications with a valid client ID and secret should be able to
successfully introspect an invalid token, receiving a 200 response
with JSON data that indicates the token is invalid
* Requests with an invalid client ID and secret should not be able
to introspect, even if the token itself is valid
Unlike #31561 (which just future-proofed the current behavior against
future changes to `DISABLE_QUERY_AUTH_TOKEN`), this is a potential
compatibility break (some introspection requests without valid client
IDs that would previously succeed will now fail). Affected deployments
must begin sending a valid HTTP basic authentication header with their
introspection requests, with the username set to a valid client ID and
the password set to the corresponding client secret.
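For example, an introspection call with client credentials could look like this (a Go sketch; the base URL, client ID, and secret are placeholders):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Introspect a token using HTTP basic auth with the OAuth2 application's
	// client ID and secret; the token itself is no longer accepted as the
	// authentication for this endpoint.
	form := url.Values{"token": {"gto_example_token"}}
	req, err := http.NewRequest("POST", "https://gitea.example.com/login/oauth/introspect",
		strings.NewReader(form.Encode()))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.SetBasicAuth("my-client-id", "my-client-secret")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // 200 with {"active": false} for an invalid token
}
```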