Create new type and remove unused code
Some checks failed
Create Release & Upload Assets / Upload Assets To Gitea w/ goreleaser (push) Failing after 11s
This commit is contained in:
.gitignore | 3 (vendored)
@@ -95,7 +95,6 @@ dkms.conf
*.dll

# Fortran module files
*.mod
*.smod

# Compiled Static libraries
@@ -110,8 +109,6 @@ dkms.conf
*.app

# ---> Laravel
/vendor/
node_modules/
npm-debug.log
yarn-error.log
vendor/github.com/BurntSushi/toml/.gitignore | 2 (generated, vendored, new file)
@@ -0,0 +1,2 @@
/toml.test
/toml-test
vendor/github.com/BurntSushi/toml/COPYING | 21 (generated, vendored, new file)
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2013 TOML authors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
vendor/github.com/BurntSushi/toml/README.md | 120 (generated, vendored, new file)
@@ -0,0 +1,120 @@
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml` packages.

Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).

Documentation: https://godocs.io/github.com/BurntSushi/toml

See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show
v0.4.0`).

This library requires Go 1.18 or newer; add it to your go.mod with:

    % go get github.com/BurntSushi/toml@latest

It also comes with a TOML validator CLI tool:

    % go install github.com/BurntSushi/toml/cmd/tomlv@latest
    % tomlv some-toml-file.toml

### Examples
For the simplest example, consider some TOML file as just a list of keys and
values:

```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```

Which can be decoded with:

```go
type Config struct {
    Age        int
    Cats       []string
    Pi         float64
    Perfection []int
    DOB        time.Time
}

var conf Config
_, err := toml.Decode(tomlData, &conf)
```

You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:

```toml
some_key_NAME = "wat"
```

```go
type TOML struct {
    ObscureKey string `toml:"some_key_NAME"`
}
```

Beware that like other decoders **only exported fields** are considered when
encoding and decoding; private fields are silently ignored.

### Using the `Marshaler` and `encoding.TextUnmarshaler` interfaces
Here's an example that automatically parses values in a `mail.Address`:

```toml
contacts = [
    "Donald Duck <donald@duckburg.com>",
    "Scrooge McDuck <scrooge@duckburg.com>",
]
```

Can be decoded with:

```go
// Create address type which satisfies the encoding.TextUnmarshaler interface.
type address struct {
    *mail.Address
}

func (a *address) UnmarshalText(text []byte) error {
    var err error
    a.Address, err = mail.ParseAddress(string(text))
    return err
}

// Decode it.
func decode() {
    blob := `
        contacts = [
            "Donald Duck <donald@duckburg.com>",
            "Scrooge McDuck <scrooge@duckburg.com>",
        ]
    `

    var contacts struct {
        Contacts []address
    }

    _, err := toml.Decode(blob, &contacts)
    if err != nil {
        log.Fatal(err)
    }

    for _, c := range contacts.Contacts {
        fmt.Printf("%#v\n", c.Address)
    }

    // Output:
    // &mail.Address{Name:"Donald Duck", Address:"donald@duckburg.com"}
    // &mail.Address{Name:"Scrooge McDuck", Address:"scrooge@duckburg.com"}
}
```

To target TOML specifically you can implement the `UnmarshalTOML` interface in
a similar way.

### More complex usage
See the [`_example/`](/_example) directory for a more complex example.
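The README's `mail.Address` example hinges on satisfying `encoding.TextUnmarshaler`; that hook can be exercised with the standard library alone. A minimal sketch, where the direct `UnmarshalText` call stands in for what the TOML decoder would do internally with the raw string value:

```go
package main

import (
	"fmt"
	"net/mail"
)

// address wraps *mail.Address so it satisfies encoding.TextUnmarshaler,
// the interface the decoder checks for string-like TOML values.
type address struct {
	*mail.Address
}

func (a *address) UnmarshalText(text []byte) error {
	var err error
	a.Address, err = mail.ParseAddress(string(text))
	return err
}

func main() {
	var a address
	// The decoder would hand the raw TOML string value to UnmarshalText.
	if err := a.UnmarshalText([]byte("Donald Duck <donald@duckburg.com>")); err != nil {
		panic(err)
	}
	fmt.Println(a.Name, a.Address.Address)
}
```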
vendor/github.com/BurntSushi/toml/decode.go | 613 (generated, vendored, new file)
@@ -0,0 +1,613 @@
package toml

import (
    "bytes"
    "encoding"
    "encoding/json"
    "fmt"
    "io"
    "io/fs"
    "math"
    "os"
    "reflect"
    "strconv"
    "strings"
    "time"
)

// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
    UnmarshalTOML(any) error
}

// Unmarshal decodes the contents of data in TOML format into a pointer v.
//
// See [Decoder] for a description of the decoding process.
func Unmarshal(data []byte, v any) error {
    _, err := NewDecoder(bytes.NewReader(data)).Decode(v)
    return err
}

// Decode the TOML data in to the pointer v.
//
// See [Decoder] for a description of the decoding process.
func Decode(data string, v any) (MetaData, error) {
    return NewDecoder(strings.NewReader(data)).Decode(v)
}

// DecodeFile reads the contents of a file and decodes it with [Decode].
func DecodeFile(path string, v any) (MetaData, error) {
    fp, err := os.Open(path)
    if err != nil {
        return MetaData{}, err
    }
    defer fp.Close()
    return NewDecoder(fp).Decode(v)
}

// DecodeFS reads the contents of a file from [fs.FS] and decodes it with
// [Decode].
func DecodeFS(fsys fs.FS, path string, v any) (MetaData, error) {
    fp, err := fsys.Open(path)
    if err != nil {
        return MetaData{}, err
    }
    defer fp.Close()
    return NewDecoder(fp).Decode(v)
}

// Primitive is a TOML value that hasn't been decoded into a Go value.
//
// This type can be used for any value, which will cause decoding to be delayed.
// You can use [PrimitiveDecode] to "manually" decode these values.
//
// NOTE: The underlying representation of a `Primitive` value is subject to
// change. Do not rely on it.
//
// NOTE: Primitive values are still parsed, so using them will only avoid the
// overhead of reflection. They can be useful when you don't know the exact type
// of TOML data until runtime.
type Primitive struct {
    undecoded any
    context   Key
}

// The significand precision for float32 and float64 is 24 and 53 bits; this is
// the range a natural number can be stored in a float without loss of data.
const (
    maxSafeFloat32Int = 16777215                // 2^24-1
    maxSafeFloat64Int = int64(9007199254740991) // 2^53-1
)

// Decoder decodes TOML data.
//
// TOML tables correspond to Go structs or maps; they can be used
// interchangeably, but structs offer better type safety.
//
// TOML table arrays correspond to either a slice of structs or a slice of maps.
//
// TOML datetimes correspond to [time.Time]. Local datetimes are parsed in the
// local timezone.
//
// [time.Duration] types are treated as nanoseconds if the TOML value is an
// integer, or they're parsed with time.ParseDuration() if they're strings.
//
// All other TOML types (float, string, int, bool and array) correspond to the
// obvious Go types.
//
// An exception to the above rules is if a type implements the TextUnmarshaler
// interface, in which case any primitive TOML value (floats, strings, integers,
// booleans, datetimes) will be converted to a []byte and given to the value's
// UnmarshalText method. See the Unmarshaler example for a demonstration with
// email addresses.
//
// # Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go struct.
// The special `toml` struct tag can be used to map TOML keys to struct fields
// that don't match the key name exactly (see the example). A case insensitive
// match to struct names will be tried if an exact match can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there may
// exist TOML values that cannot be placed into your representation, and there
// may be parts of your representation that do not correspond to TOML values.
// This loose mapping can be made stricter by using the IsDefined and/or
// Undecoded methods on the MetaData returned.
//
// This decoder does not handle cyclic types. Decode will not terminate if a
// cyclic type is passed.
type Decoder struct {
    r io.Reader
}

// NewDecoder creates a new Decoder.
func NewDecoder(r io.Reader) *Decoder {
    return &Decoder{r: r}
}

var (
    unmarshalToml = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
    unmarshalText = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
    primitiveType = reflect.TypeOf((*Primitive)(nil)).Elem()
)

// Decode TOML data in to the pointer `v`.
func (dec *Decoder) Decode(v any) (MetaData, error) {
    rv := reflect.ValueOf(v)
    if rv.Kind() != reflect.Ptr {
        s := "%q"
        if reflect.TypeOf(v) == nil {
            s = "%v"
        }

        return MetaData{}, fmt.Errorf("toml: cannot decode to non-pointer "+s, reflect.TypeOf(v))
    }
    if rv.IsNil() {
        return MetaData{}, fmt.Errorf("toml: cannot decode to nil value of %q", reflect.TypeOf(v))
    }

    // Check if this is a supported type: struct, map, any, or something that
    // implements UnmarshalTOML or UnmarshalText.
    rv = indirect(rv)
    rt := rv.Type()
    if rv.Kind() != reflect.Struct && rv.Kind() != reflect.Map &&
        !(rv.Kind() == reflect.Interface && rv.NumMethod() == 0) &&
        !rt.Implements(unmarshalToml) && !rt.Implements(unmarshalText) {
        return MetaData{}, fmt.Errorf("toml: cannot decode to type %s", rt)
    }

    // TODO: parser should read from io.Reader? Or at the very least, make it
    // read from []byte rather than string
    data, err := io.ReadAll(dec.r)
    if err != nil {
        return MetaData{}, err
    }

    p, err := parse(string(data))
    if err != nil {
        return MetaData{}, err
    }

    md := MetaData{
        mapping: p.mapping,
        keyInfo: p.keyInfo,
        keys:    p.ordered,
        decoded: make(map[string]struct{}, len(p.ordered)),
        context: nil,
        data:    data,
    }
    return md, md.unify(p.mapping, rv)
}

// PrimitiveDecode is just like the other Decode* functions, except it decodes a
// TOML value that has already been parsed. Valid primitive values can *only* be
// obtained from values filled by the decoder functions, including this method.
// (i.e., v may contain more [Primitive] values.)
//
// Meta data for primitive values is included in the meta data returned by the
// Decode* functions with one exception: keys returned by the Undecoded method
// will only reflect keys that were decoded. Namely, any keys hidden behind a
// Primitive will be considered undecoded. Executing this method will update the
// undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v any) error {
    md.context = primValue.context
    defer func() { md.context = nil }()
    return md.unify(primValue.undecoded, rvalue(v))
}

// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data any, rv reflect.Value) error {
    // Special case. Look for a `Primitive` value.
    // TODO: #76 would make this superfluous after implemented.
    if rv.Type() == primitiveType {
        // Save the undecoded data and the key context into the primitive
        // value.
        context := make(Key, len(md.context))
        copy(context, md.context)
        rv.Set(reflect.ValueOf(Primitive{
            undecoded: data,
            context:   context,
        }))
        return nil
    }

    rvi := rv.Interface()
    if v, ok := rvi.(Unmarshaler); ok {
        err := v.UnmarshalTOML(data)
        if err != nil {
            return md.parseErr(err)
        }
        return nil
    }
    if v, ok := rvi.(encoding.TextUnmarshaler); ok {
        return md.unifyText(data, v)
    }

    // TODO:
    // The behavior here is incorrect whenever a Go type satisfies the
    // encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
    // array. In particular, the unmarshaler should only be applied to primitive
    // TOML values. But at this point, it will be applied to all kinds of values
    // and produce an incorrect error whenever those values are hashes or arrays
    // (including arrays of tables).

    k := rv.Kind()

    if k >= reflect.Int && k <= reflect.Uint64 {
        return md.unifyInt(data, rv)
    }
    switch k {
    case reflect.Struct:
        return md.unifyStruct(data, rv)
    case reflect.Map:
        return md.unifyMap(data, rv)
    case reflect.Array:
        return md.unifyArray(data, rv)
    case reflect.Slice:
        return md.unifySlice(data, rv)
    case reflect.String:
        return md.unifyString(data, rv)
    case reflect.Bool:
        return md.unifyBool(data, rv)
    case reflect.Interface:
        if rv.NumMethod() > 0 { // Only empty interfaces are supported.
            return md.e("unsupported type %s", rv.Type())
        }
        return md.unifyAnything(data, rv)
    case reflect.Float32, reflect.Float64:
        return md.unifyFloat64(data, rv)
    }
    return md.e("unsupported type %s", rv.Kind())
}

func (md *MetaData) unifyStruct(mapping any, rv reflect.Value) error {
    tmap, ok := mapping.(map[string]any)
    if !ok {
        if mapping == nil {
            return nil
        }
        return md.e("type mismatch for %s: expected table but found %s", rv.Type().String(), fmtType(mapping))
    }

    for key, datum := range tmap {
        var f *field
        fields := cachedTypeFields(rv.Type())
        for i := range fields {
            ff := &fields[i]
            if ff.name == key {
                f = ff
                break
            }
            if f == nil && strings.EqualFold(ff.name, key) {
                f = ff
            }
        }
        if f != nil {
            subv := rv
            for _, i := range f.index {
                subv = indirect(subv.Field(i))
            }

            if isUnifiable(subv) {
                md.decoded[md.context.add(key).String()] = struct{}{}
                md.context = append(md.context, key)

                err := md.unify(datum, subv)
                if err != nil {
                    return err
                }
                md.context = md.context[0 : len(md.context)-1]
            } else if f.name != "" {
                return md.e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
            }
        }
    }
    return nil
}

func (md *MetaData) unifyMap(mapping any, rv reflect.Value) error {
    keyType := rv.Type().Key().Kind()
    if keyType != reflect.String && keyType != reflect.Interface {
        return fmt.Errorf("toml: cannot decode to a map with non-string key type (%s in %q)",
            keyType, rv.Type())
    }

    tmap, ok := mapping.(map[string]any)
    if !ok {
        if tmap == nil {
            return nil
        }
        return md.badtype("map", mapping)
    }
    if rv.IsNil() {
        rv.Set(reflect.MakeMap(rv.Type()))
    }
    for k, v := range tmap {
        md.decoded[md.context.add(k).String()] = struct{}{}
        md.context = append(md.context, k)

        rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))

        err := md.unify(v, indirect(rvval))
        if err != nil {
            return err
        }
        md.context = md.context[0 : len(md.context)-1]

        rvkey := indirect(reflect.New(rv.Type().Key()))

        switch keyType {
        case reflect.Interface:
            rvkey.Set(reflect.ValueOf(k))
        case reflect.String:
            rvkey.SetString(k)
        }

        rv.SetMapIndex(rvkey, rvval)
    }
    return nil
}

func (md *MetaData) unifyArray(data any, rv reflect.Value) error {
    datav := reflect.ValueOf(data)
    if datav.Kind() != reflect.Slice {
        if !datav.IsValid() {
            return nil
        }
        return md.badtype("slice", data)
    }
    if l := datav.Len(); l != rv.Len() {
        return md.e("expected array length %d; got TOML array of length %d", rv.Len(), l)
    }
    return md.unifySliceArray(datav, rv)
}

func (md *MetaData) unifySlice(data any, rv reflect.Value) error {
    datav := reflect.ValueOf(data)
    if datav.Kind() != reflect.Slice {
        if !datav.IsValid() {
            return nil
        }
        return md.badtype("slice", data)
    }
    n := datav.Len()
    if rv.IsNil() || rv.Cap() < n {
        rv.Set(reflect.MakeSlice(rv.Type(), n, n))
    }
    rv.SetLen(n)
    return md.unifySliceArray(datav, rv)
}

func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
    l := data.Len()
    for i := 0; i < l; i++ {
        err := md.unify(data.Index(i).Interface(), indirect(rv.Index(i)))
        if err != nil {
            return err
        }
    }
    return nil
}

func (md *MetaData) unifyString(data any, rv reflect.Value) error {
    _, ok := rv.Interface().(json.Number)
    if ok {
        if i, ok := data.(int64); ok {
            rv.SetString(strconv.FormatInt(i, 10))
        } else if f, ok := data.(float64); ok {
            rv.SetString(strconv.FormatFloat(f, 'f', -1, 64))
        } else {
            return md.badtype("string", data)
        }
        return nil
    }

    if s, ok := data.(string); ok {
        rv.SetString(s)
        return nil
    }
    return md.badtype("string", data)
}

func (md *MetaData) unifyFloat64(data any, rv reflect.Value) error {
    rvk := rv.Kind()

    if num, ok := data.(float64); ok {
        switch rvk {
        case reflect.Float32:
            if num < -math.MaxFloat32 || num > math.MaxFloat32 {
                return md.parseErr(errParseRange{i: num, size: rvk.String()})
            }
            fallthrough
        case reflect.Float64:
            rv.SetFloat(num)
        default:
            panic("bug")
        }
        return nil
    }

    if num, ok := data.(int64); ok {
        if (rvk == reflect.Float32 && (num < -maxSafeFloat32Int || num > maxSafeFloat32Int)) ||
            (rvk == reflect.Float64 && (num < -maxSafeFloat64Int || num > maxSafeFloat64Int)) {
            return md.parseErr(errUnsafeFloat{i: num, size: rvk.String()})
        }
        rv.SetFloat(float64(num))
        return nil
    }

    return md.badtype("float", data)
}

func (md *MetaData) unifyInt(data any, rv reflect.Value) error {
    _, ok := rv.Interface().(time.Duration)
    if ok {
        // Parse as string duration, and fall back to regular integer parsing
        // (as nanosecond) if this is not a string.
        if s, ok := data.(string); ok {
            dur, err := time.ParseDuration(s)
            if err != nil {
                return md.parseErr(errParseDuration{s})
            }
            rv.SetInt(int64(dur))
            return nil
        }
    }

    num, ok := data.(int64)
    if !ok {
        return md.badtype("integer", data)
    }

    rvk := rv.Kind()
    switch {
    case rvk >= reflect.Int && rvk <= reflect.Int64:
        if (rvk == reflect.Int8 && (num < math.MinInt8 || num > math.MaxInt8)) ||
            (rvk == reflect.Int16 && (num < math.MinInt16 || num > math.MaxInt16)) ||
            (rvk == reflect.Int32 && (num < math.MinInt32 || num > math.MaxInt32)) {
            return md.parseErr(errParseRange{i: num, size: rvk.String()})
        }
        rv.SetInt(num)
    case rvk >= reflect.Uint && rvk <= reflect.Uint64:
        unum := uint64(num)
        if rvk == reflect.Uint8 && (num < 0 || unum > math.MaxUint8) ||
            rvk == reflect.Uint16 && (num < 0 || unum > math.MaxUint16) ||
            rvk == reflect.Uint32 && (num < 0 || unum > math.MaxUint32) {
            return md.parseErr(errParseRange{i: num, size: rvk.String()})
        }
        rv.SetUint(unum)
    default:
        panic("unreachable")
    }
    return nil
}

func (md *MetaData) unifyBool(data any, rv reflect.Value) error {
    if b, ok := data.(bool); ok {
        rv.SetBool(b)
        return nil
    }
    return md.badtype("boolean", data)
}

func (md *MetaData) unifyAnything(data any, rv reflect.Value) error {
    rv.Set(reflect.ValueOf(data))
    return nil
}

func (md *MetaData) unifyText(data any, v encoding.TextUnmarshaler) error {
    var s string
    switch sdata := data.(type) {
    case Marshaler:
        text, err := sdata.MarshalTOML()
        if err != nil {
            return err
        }
        s = string(text)
    case encoding.TextMarshaler:
        text, err := sdata.MarshalText()
        if err != nil {
            return err
        }
        s = string(text)
    case fmt.Stringer:
        s = sdata.String()
    case string:
        s = sdata
    case bool:
        s = fmt.Sprintf("%v", sdata)
    case int64:
        s = fmt.Sprintf("%d", sdata)
    case float64:
        s = fmt.Sprintf("%f", sdata)
    default:
        return md.badtype("primitive (string-like)", data)
    }
    if err := v.UnmarshalText([]byte(s)); err != nil {
        return md.parseErr(err)
    }
    return nil
}

func (md *MetaData) badtype(dst string, data any) error {
    return md.e("incompatible types: TOML value has type %s; destination has type %s", fmtType(data), dst)
}

func (md *MetaData) parseErr(err error) error {
    k := md.context.String()
    return ParseError{
        LastKey:  k,
        Position: md.keyInfo[k].pos,
        Line:     md.keyInfo[k].pos.Line,
        err:      err,
        input:    string(md.data),
    }
}

func (md *MetaData) e(format string, args ...any) error {
    f := "toml: "
    if len(md.context) > 0 {
        f = fmt.Sprintf("toml: (last key %q): ", md.context)
        p := md.keyInfo[md.context.String()].pos
        if p.Line > 0 {
            f = fmt.Sprintf("toml: line %d (last key %q): ", p.Line, md.context)
        }
    }
    return fmt.Errorf(f+format, args...)
}

// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v any) reflect.Value {
    return indirect(reflect.ValueOf(v))
}

// indirect returns the value pointed to by a pointer.
//
// Pointers are followed until the value is not a pointer. New values are
// allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of interest
// to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
    if v.Kind() != reflect.Ptr {
        if v.CanSet() {
            pv := v.Addr()
            pvi := pv.Interface()
            if _, ok := pvi.(encoding.TextUnmarshaler); ok {
                return pv
            }
            if _, ok := pvi.(Unmarshaler); ok {
                return pv
            }
        }
        return v
    }
    if v.IsNil() {
        v.Set(reflect.New(v.Type().Elem()))
    }
    return indirect(reflect.Indirect(v))
}

func isUnifiable(rv reflect.Value) bool {
    if rv.CanSet() {
        return true
    }
    rvi := rv.Interface()
    if _, ok := rvi.(encoding.TextUnmarshaler); ok {
        return true
    }
    if _, ok := rvi.(Unmarshaler); ok {
        return true
    }
    return false
}

// fmt %T with "interface {}" replaced with "any", which is far more readable.
func fmtType(t any) string {
    return strings.ReplaceAll(fmt.Sprintf("%T", t), "interface {}", "any")
}
vendor/github.com/BurntSushi/toml/deprecated.go | 29 (generated, vendored, new file)
@@ -0,0 +1,29 @@
package toml

import (
    "encoding"
    "io"
)

// TextMarshaler is an alias for encoding.TextMarshaler.
//
// Deprecated: use encoding.TextMarshaler
type TextMarshaler encoding.TextMarshaler

// TextUnmarshaler is an alias for encoding.TextUnmarshaler.
//
// Deprecated: use encoding.TextUnmarshaler
type TextUnmarshaler encoding.TextUnmarshaler

// DecodeReader is an alias for NewDecoder(r).Decode(v).
//
// Deprecated: use NewDecoder(reader).Decode(&value).
func DecodeReader(r io.Reader, v any) (MetaData, error) { return NewDecoder(r).Decode(v) }

// PrimitiveDecode is an alias for MetaData.PrimitiveDecode().
//
// Deprecated: use MetaData.PrimitiveDecode.
func PrimitiveDecode(primValue Primitive, v any) error {
    md := MetaData{decoded: make(map[string]struct{})}
    return md.unify(primValue.undecoded, rvalue(v))
}
vendor/github.com/BurntSushi/toml/doc.go | 8 (generated, vendored, new file)
@@ -0,0 +1,8 @@
// Package toml implements decoding and encoding of TOML files.
//
// This package supports TOML v1.0.0, as specified at https://toml.io
//
// The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
// and can be used to verify if a TOML document is valid. It can also be used to
// print the type of each key.
package toml
778
vendor/github.com/BurntSushi/toml/encode.go
generated
vendored
Normal file
778
vendor/github.com/BurntSushi/toml/encode.go
generated
vendored
Normal file
@@ -0,0 +1,778 @@
|
||||
package toml
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"encoding"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/BurntSushi/toml/internal"
|
||||
)
|
||||
|
||||
type tomlEncodeError struct{ error }
|
||||
|
||||
var (
|
||||
errArrayNilElement = errors.New("toml: cannot encode array with nil element")
|
||||
errNonString = errors.New("toml: cannot encode a map with non-string key type")
|
||||
errNoKey = errors.New("toml: top-level values must be Go maps or structs")
|
||||
errAnything = errors.New("") // used in testing
|
||||
)
|
||||
|
||||
var dblQuotedReplacer = strings.NewReplacer(
|
||||
"\"", "\\\"",
|
||||
"\\", "\\\\",
|
||||
"\x00", `\u0000`,
|
||||
"\x01", `\u0001`,
|
||||
"\x02", `\u0002`,
|
||||
"\x03", `\u0003`,
|
||||
"\x04", `\u0004`,
|
||||
"\x05", `\u0005`,
|
||||
"\x06", `\u0006`,
|
||||
"\x07", `\u0007`,
|
||||
"\b", `\b`,
|
||||
"\t", `\t`,
|
||||
"\n", `\n`,
|
||||
"\x0b", `\u000b`,
|
||||
"\f", `\f`,
|
||||
"\r", `\r`,
|
||||
"\x0e", `\u000e`,
|
||||
"\x0f", `\u000f`,
|
||||
"\x10", `\u0010`,
|
||||
"\x11", `\u0011`,
|
||||
"\x12", `\u0012`,
|
||||
"\x13", `\u0013`,
|
||||
"\x14", `\u0014`,
|
||||
"\x15", `\u0015`,
|
||||
"\x16", `\u0016`,
|
||||
"\x17", `\u0017`,
|
||||
"\x18", `\u0018`,
|
||||
"\x19", `\u0019`,
|
||||
"\x1a", `\u001a`,
|
||||
"\x1b", `\u001b`,
|
||||
"\x1c", `\u001c`,
|
||||
"\x1d", `\u001d`,
|
||||
"\x1e", `\u001e`,
|
||||
"\x1f", `\u001f`,
|
||||
"\x7f", `\u007f`,
|
||||
)
|
||||
|
||||
var (
|
||||
marshalToml = reflect.TypeOf((*Marshaler)(nil)).Elem()
|
||||
marshalText = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
|
||||
timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
|
||||
)
|
||||
|
||||
// Marshaler is the interface implemented by types that can marshal themselves
|
||||
// into valid TOML.
|
||||
type Marshaler interface {
|
||||
MarshalTOML() ([]byte, error)
|
||||
}
|
||||
|
||||
// Marshal returns a TOML representation of the Go value.
|
||||
//
|
||||
// See [Encoder] for a description of the encoding process.
|
||||
func Marshal(v any) ([]byte, error) {
|
||||
buff := new(bytes.Buffer)
|
||||
if err := NewEncoder(buff).Encode(v); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return buff.Bytes(), nil
|
||||
}

// Encoder encodes a Go value to a TOML document.
//
// The mapping between Go values and TOML values should be precisely the same as
// for [Decode].
//
// time.Time is encoded as an RFC 3339 string, and time.Duration as its string
// representation.
//
// The [Marshaler] and [encoding.TextMarshaler] interfaces are supported to
// encode the value as custom TOML.
//
// If you want to write arbitrary binary data then you will need to use
// something like base64, since TOML does not have any binary types.
//
// When encoding TOML hashes (Go maps or structs), keys without any sub-hashes
// are encoded first.
//
// Go maps will be sorted alphabetically by key for deterministic output.
//
// The toml struct tag can be used to provide the key name; if omitted the
// struct field name will be used. If the "omitempty" option is present the
// following values will be skipped:
//
//   - arrays, slices, maps, and strings with len of 0
//   - structs with all zero values
//   - bool false
//
// If "omitzero" is given, all int and float types with a value of 0 will be
// skipped.
//
// Encoding Go values without a corresponding TOML representation will return an
// error. Examples of this include maps with non-string keys, slices with nil
// elements, embedded non-struct types, and nested slices containing maps or
// structs. (e.g. [][]map[string]string is not allowed but []map[string]string
// is okay, as is []map[string][]string).
//
// NOTE: only exported keys are encoded due to the use of reflection. Unexported
// keys are silently discarded.
type Encoder struct {
	Indent string // string for a single indentation level; default is two spaces.

	hasWritten bool // written any output to w yet?
	w          *bufio.Writer
}

// NewEncoder creates a new Encoder.
func NewEncoder(w io.Writer) *Encoder {
	return &Encoder{w: bufio.NewWriter(w), Indent: "  "}
}

// Encode writes a TOML representation of the Go value to the [Encoder]'s writer.
//
// An error is returned if the value given cannot be encoded to a valid TOML
// document.
func (enc *Encoder) Encode(v any) error {
	rv := eindirect(reflect.ValueOf(v))
	err := enc.safeEncode(Key([]string{}), rv)
	if err != nil {
		return err
	}
	return enc.w.Flush()
}

func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
	defer func() {
		if r := recover(); r != nil {
			if terr, ok := r.(tomlEncodeError); ok {
				err = terr.error
				return
			}
			panic(r)
		}
	}()
	enc.encode(key, rv)
	return nil
}

func (enc *Encoder) encode(key Key, rv reflect.Value) {
	// If we can marshal the type to text, then we use that. This prevents the
	// encoder from handling these types as generic structs (or whatever the
	// underlying type of a TextMarshaler is).
	switch {
	case isMarshaler(rv):
		enc.writeKeyValue(key, rv, false)
		return
	case rv.Type() == primitiveType: // TODO: #76 would make this superfluous after implemented.
		enc.encode(key, reflect.ValueOf(rv.Interface().(Primitive).undecoded))
		return
	}

	k := rv.Kind()
	switch k {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
		reflect.Int64,
		reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
		reflect.Uint64,
		reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
		enc.writeKeyValue(key, rv, false)
	case reflect.Array, reflect.Slice:
		if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
			enc.eArrayOfTables(key, rv)
		} else {
			enc.writeKeyValue(key, rv, false)
		}
	case reflect.Interface:
		if rv.IsNil() {
			return
		}
		enc.encode(key, rv.Elem())
	case reflect.Map:
		if rv.IsNil() {
			return
		}
		enc.eTable(key, rv)
	case reflect.Ptr:
		if rv.IsNil() {
			return
		}
		enc.encode(key, rv.Elem())
	case reflect.Struct:
		enc.eTable(key, rv)
	default:
		encPanic(fmt.Errorf("unsupported type for key '%s': %s", key, k))
	}
}

// eElement encodes any value that can be an array element.
func (enc *Encoder) eElement(rv reflect.Value) {
	switch v := rv.Interface().(type) {
	case time.Time: // Using TextMarshaler adds extra quotes, which we don't want.
		format := time.RFC3339Nano
		switch v.Location() {
		case internal.LocalDatetime:
			format = "2006-01-02T15:04:05.999999999"
		case internal.LocalDate:
			format = "2006-01-02"
		case internal.LocalTime:
			format = "15:04:05.999999999"
		}
		switch v.Location() {
		default:
			enc.wf(v.Format(format))
		case internal.LocalDatetime, internal.LocalDate, internal.LocalTime:
			enc.wf(v.In(time.UTC).Format(format))
		}
		return
	case Marshaler:
		s, err := v.MarshalTOML()
		if err != nil {
			encPanic(err)
		}
		if s == nil {
			encPanic(errors.New("MarshalTOML returned nil and no error"))
		}
		enc.w.Write(s)
		return
	case encoding.TextMarshaler:
		s, err := v.MarshalText()
		if err != nil {
			encPanic(err)
		}
		if s == nil {
			encPanic(errors.New("MarshalText returned nil and no error"))
		}
		enc.writeQuoted(string(s))
		return
	case time.Duration:
		enc.writeQuoted(v.String())
		return
	case json.Number:
		n, _ := rv.Interface().(json.Number)

		if n == "" { /// Useful zero value.
			enc.w.WriteByte('0')
			return
		} else if v, err := n.Int64(); err == nil {
			enc.eElement(reflect.ValueOf(v))
			return
		} else if v, err := n.Float64(); err == nil {
			enc.eElement(reflect.ValueOf(v))
			return
		}
		encPanic(fmt.Errorf("unable to convert %q to int64 or float64", n))
	}

	switch rv.Kind() {
	case reflect.Ptr:
		enc.eElement(rv.Elem())
		return
	case reflect.String:
		enc.writeQuoted(rv.String())
	case reflect.Bool:
		enc.wf(strconv.FormatBool(rv.Bool()))
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		enc.wf(strconv.FormatInt(rv.Int(), 10))
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		enc.wf(strconv.FormatUint(rv.Uint(), 10))
	case reflect.Float32:
		f := rv.Float()
		if math.IsNaN(f) {
			if math.Signbit(f) {
				enc.wf("-")
			}
			enc.wf("nan")
		} else if math.IsInf(f, 0) {
			if math.Signbit(f) {
				enc.wf("-")
			}
			enc.wf("inf")
		} else {
			enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 32)))
		}
	case reflect.Float64:
		f := rv.Float()
		if math.IsNaN(f) {
			if math.Signbit(f) {
				enc.wf("-")
			}
			enc.wf("nan")
		} else if math.IsInf(f, 0) {
			if math.Signbit(f) {
				enc.wf("-")
			}
			enc.wf("inf")
		} else {
			enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 64)))
		}
	case reflect.Array, reflect.Slice:
		enc.eArrayOrSliceElement(rv)
	case reflect.Struct:
		enc.eStruct(nil, rv, true)
	case reflect.Map:
		enc.eMap(nil, rv, true)
	case reflect.Interface:
		enc.eElement(rv.Elem())
	default:
		encPanic(fmt.Errorf("unexpected type: %s", fmtType(rv.Interface())))
	}
}

// By the TOML spec, all floats must have a decimal with at least one number on
// either side.
func floatAddDecimal(fstr string) string {
	if !strings.Contains(fstr, ".") {
		return fstr + ".0"
	}
	return fstr
}

func (enc *Encoder) writeQuoted(s string) {
	enc.wf("\"%s\"", dblQuotedReplacer.Replace(s))
}

func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
	length := rv.Len()
	enc.wf("[")
	for i := 0; i < length; i++ {
		elem := eindirect(rv.Index(i))
		enc.eElement(elem)
		if i != length-1 {
			enc.wf(", ")
		}
	}
	enc.wf("]")
}

func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
	if len(key) == 0 {
		encPanic(errNoKey)
	}
	for i := 0; i < rv.Len(); i++ {
		trv := eindirect(rv.Index(i))
		if isNil(trv) {
			continue
		}
		enc.newline()
		enc.wf("%s[[%s]]", enc.indentStr(key), key)
		enc.newline()
		enc.eMapOrStruct(key, trv, false)
	}
}

func (enc *Encoder) eTable(key Key, rv reflect.Value) {
	if len(key) == 1 {
		// Output an extra newline between top-level tables.
		// (The newline isn't written if nothing else has been written though.)
		enc.newline()
	}
	if len(key) > 0 {
		enc.wf("%s[%s]", enc.indentStr(key), key)
		enc.newline()
	}
	enc.eMapOrStruct(key, rv, false)
}

func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
	switch rv.Kind() {
	case reflect.Map:
		enc.eMap(key, rv, inline)
	case reflect.Struct:
		enc.eStruct(key, rv, inline)
	default:
		// Should never happen?
		panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
	}
}

func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
	rt := rv.Type()
	if rt.Key().Kind() != reflect.String {
		encPanic(errNonString)
	}

	// Sort keys so that we have deterministic output. And write keys directly
	// underneath this key first, before writing sub-structs or sub-maps.
	var mapKeysDirect, mapKeysSub []string
	for _, mapKey := range rv.MapKeys() {
		k := mapKey.String()
		if typeIsTable(tomlTypeOfGo(eindirect(rv.MapIndex(mapKey)))) {
			mapKeysSub = append(mapKeysSub, k)
		} else {
			mapKeysDirect = append(mapKeysDirect, k)
		}
	}

	var writeMapKeys = func(mapKeys []string, trailC bool) {
		sort.Strings(mapKeys)
		for i, mapKey := range mapKeys {
			val := eindirect(rv.MapIndex(reflect.ValueOf(mapKey)))
			if isNil(val) {
				continue
			}

			if inline {
				enc.writeKeyValue(Key{mapKey}, val, true)
				if trailC || i != len(mapKeys)-1 {
					enc.wf(", ")
				}
			} else {
				enc.encode(key.add(mapKey), val)
			}
		}
	}

	if inline {
		enc.wf("{")
	}
	writeMapKeys(mapKeysDirect, len(mapKeysSub) > 0)
	writeMapKeys(mapKeysSub, false)
	if inline {
		enc.wf("}")
	}
}

const is32Bit = (32 << (^uint(0) >> 63)) == 32

func pointerTo(t reflect.Type) reflect.Type {
	if t.Kind() == reflect.Ptr {
		return pointerTo(t.Elem())
	}
	return t
}

func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
	// Write keys for fields directly under this key first, because if we write
	// a field that creates a new table then all keys under it will be in that
	// table (not the one we're writing here).
	//
	// Fields is a [][]int: for fieldsDirect this always has one entry (the
	// struct index). For fieldsSub it contains two entries: the parent field
	// index from tv, and the field indexes for the fields of the sub.
	var (
		rt                      = rv.Type()
		fieldsDirect, fieldsSub [][]int
		addFields               func(rt reflect.Type, rv reflect.Value, start []int)
	)
	addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
		for i := 0; i < rt.NumField(); i++ {
			f := rt.Field(i)
			isEmbed := f.Anonymous && pointerTo(f.Type).Kind() == reflect.Struct
			if f.PkgPath != "" && !isEmbed { /// Skip unexported fields.
				continue
			}
			opts := getOptions(f.Tag)
			if opts.skip {
				continue
			}

			frv := eindirect(rv.Field(i))

			if is32Bit {
				// Copy so it works correctly on 32-bit archs; not clear why
				// this is needed. See #314, and https://www.reddit.com/r/golang/comments/pnx8v4
				// This also works fine on 64-bit, but 32-bit archs are
				// somewhat rare and this is a wee bit faster.
				copyStart := make([]int, len(start))
				copy(copyStart, start)
				start = copyStart
			}

			// Treat anonymous struct fields with tag names as though they are
			// not anonymous, like encoding/json does.
			//
			// Non-struct anonymous fields use the normal encoding logic.
			if isEmbed {
				if getOptions(f.Tag).name == "" && frv.Kind() == reflect.Struct {
					addFields(frv.Type(), frv, append(start, f.Index...))
					continue
				}
			}

			if typeIsTable(tomlTypeOfGo(frv)) {
				fieldsSub = append(fieldsSub, append(start, f.Index...))
			} else {
				fieldsDirect = append(fieldsDirect, append(start, f.Index...))
			}
		}
	}
	addFields(rt, rv, nil)

	writeFields := func(fields [][]int) {
		for _, fieldIndex := range fields {
			fieldType := rt.FieldByIndex(fieldIndex)
			fieldVal := rv.FieldByIndex(fieldIndex)

			opts := getOptions(fieldType.Tag)
			if opts.skip {
				continue
			}
			if opts.omitempty && isEmpty(fieldVal) {
				continue
			}

			fieldVal = eindirect(fieldVal)

			if isNil(fieldVal) { /// Don't write anything for nil fields.
				continue
			}

			keyName := fieldType.Name
			if opts.name != "" {
				keyName = opts.name
			}

			if opts.omitzero && isZero(fieldVal) {
				continue
			}

			if inline {
				enc.writeKeyValue(Key{keyName}, fieldVal, true)
				if fieldIndex[0] != len(fields)-1 {
					enc.wf(", ")
				}
			} else {
				enc.encode(key.add(keyName), fieldVal)
			}
		}
	}

	if inline {
		enc.wf("{")
	}
	writeFields(fieldsDirect)
	writeFields(fieldsSub)
	if inline {
		enc.wf("}")
	}
}

// tomlTypeOfGo returns the TOML type name of the Go value's type.
//
// It is used to determine whether the types of array elements are mixed (which
// is forbidden). If the Go value is nil, then it is illegal for it to be an
// array element, and valueIsNil is returned as true.
//
// The type may be `nil`, which means no concrete TOML type could be found.
func tomlTypeOfGo(rv reflect.Value) tomlType {
	if isNil(rv) || !rv.IsValid() {
		return nil
	}

	if rv.Kind() == reflect.Struct {
		if rv.Type() == timeType {
			return tomlDatetime
		}
		if isMarshaler(rv) {
			return tomlString
		}
		return tomlHash
	}

	if isMarshaler(rv) {
		return tomlString
	}

	switch rv.Kind() {
	case reflect.Bool:
		return tomlBool
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
		reflect.Int64,
		reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
		reflect.Uint64:
		return tomlInteger
	case reflect.Float32, reflect.Float64:
		return tomlFloat
	case reflect.Array, reflect.Slice:
		if isTableArray(rv) {
			return tomlArrayHash
		}
		return tomlArray
	case reflect.Ptr, reflect.Interface:
		return tomlTypeOfGo(rv.Elem())
	case reflect.String:
		return tomlString
	case reflect.Map:
		return tomlHash
	default:
		encPanic(errors.New("unsupported type: " + rv.Kind().String()))
		panic("unreachable")
	}
}

func isMarshaler(rv reflect.Value) bool {
	return rv.Type().Implements(marshalText) || rv.Type().Implements(marshalToml)
}

// isTableArray reports if all entries in the array or slice are a table.
func isTableArray(arr reflect.Value) bool {
	if isNil(arr) || !arr.IsValid() || arr.Len() == 0 {
		return false
	}

	ret := true
	for i := 0; i < arr.Len(); i++ {
		tt := tomlTypeOfGo(eindirect(arr.Index(i)))
		// Don't allow nil.
		if tt == nil {
			encPanic(errArrayNilElement)
		}

		if ret && !typeEqual(tomlHash, tt) {
			ret = false
		}
	}
	return ret
}

type tagOptions struct {
	skip      bool // "-"
	name      string
	omitempty bool
	omitzero  bool
}

func getOptions(tag reflect.StructTag) tagOptions {
	t := tag.Get("toml")
	if t == "-" {
		return tagOptions{skip: true}
	}
	var opts tagOptions
	parts := strings.Split(t, ",")
	opts.name = parts[0]
	for _, s := range parts[1:] {
		switch s {
		case "omitempty":
			opts.omitempty = true
		case "omitzero":
			opts.omitzero = true
		}
	}
	return opts
}

func isZero(rv reflect.Value) bool {
	switch rv.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return rv.Int() == 0
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		return rv.Uint() == 0
	case reflect.Float32, reflect.Float64:
		return rv.Float() == 0.0
	}
	return false
}

func isEmpty(rv reflect.Value) bool {
	switch rv.Kind() {
	case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
		return rv.Len() == 0
	case reflect.Struct:
		if rv.Type().Comparable() {
			return reflect.Zero(rv.Type()).Interface() == rv.Interface()
		}
		// Need to also check if all the fields are empty, otherwise something
		// like this with uncomparable types will always return true:
		//
		//	type a struct{ field b }
		//	type b struct{ s []string }
		//	s := a{field: b{s: []string{"AAA"}}}
		for i := 0; i < rv.NumField(); i++ {
			if !isEmpty(rv.Field(i)) {
				return false
			}
		}
		return true
	case reflect.Bool:
		return !rv.Bool()
	case reflect.Ptr:
		return rv.IsNil()
	}
	return false
}

func (enc *Encoder) newline() {
	if enc.hasWritten {
		enc.wf("\n")
	}
}

// Write a key/value pair:
//
//	key = <any value>
//
// This is also used for "k = v" in inline tables; so something like this will
// be written in three calls:
//
//	┌───────────────────┐
//	│     ┌───┐  ┌────┐│
//	v     v   v  v    vv
//	key = {k = 1, k2 = 2}
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
	/// Marshaler used on top-level document; call eElement() to just call
	/// Marshal{TOML,Text}.
	if len(key) == 0 {
		enc.eElement(val)
		return
	}
	enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
	enc.eElement(val)
	if !inline {
		enc.newline()
	}
}

func (enc *Encoder) wf(format string, v ...any) {
	_, err := fmt.Fprintf(enc.w, format, v...)
	if err != nil {
		encPanic(err)
	}
	enc.hasWritten = true
}

func (enc *Encoder) indentStr(key Key) string {
	return strings.Repeat(enc.Indent, len(key)-1)
}

func encPanic(err error) {
	panic(tomlEncodeError{err})
}

// Resolve any level of pointers to the actual value (e.g. **string → string).
func eindirect(v reflect.Value) reflect.Value {
	if v.Kind() != reflect.Ptr && v.Kind() != reflect.Interface {
		if isMarshaler(v) {
			return v
		}
		if v.CanAddr() { /// Special case for marshalers; see #358.
			if pv := v.Addr(); isMarshaler(pv) {
				return pv
			}
		}
		return v
	}

	if v.IsNil() {
		return v
	}

	return eindirect(v.Elem())
}

func isNil(rv reflect.Value) bool {
	switch rv.Kind() {
	case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
		return rv.IsNil()
	default:
		return false
	}
}

356	vendor/github.com/BurntSushi/toml/error.go	generated vendored	Normal file
@@ -0,0 +1,356 @@

package toml

import (
	"fmt"
	"strings"
)

// ParseError is returned when there is an error parsing the TOML syntax, such
// as invalid syntax, duplicate keys, etc.
//
// In addition to the error message itself, you can also print detailed location
// information with context by using [ErrorWithPosition]:
//
//	toml: error: Key 'fruit' was already created and cannot be used as an array.
//
//	At line 4, column 2-7:
//
//	      2 | fruit = []
//	      3 |
//	      4 | [[fruit]] # Not allowed
//	            ^^^^^
//
// [ErrorWithUsage] can be used to print the above with some more detailed usage
// guidance:
//
//	toml: error: newlines not allowed within inline tables
//
//	At line 1, column 18:
//
//	      1 | x = [{ key = 42 #
//	                         ^
//
//	Error help:
//
//	  Inline tables must always be on a single line:
//
//	      table = {key = 42, second = 43}
//
//	  It is invalid to split them over multiple lines like so:
//
//	      # INVALID
//	      table = {
//	          key = 42,
//	          second = 43
//	      }
//
//	  Use regular tables for this:
//
//	      [table]
//	      key = 42
//	      second = 43
type ParseError struct {
	Message  string   // Short technical message.
	Usage    string   // Longer message with usage guidance; may be blank.
	Position Position // Position of the error
	LastKey  string   // Last parsed key, may be blank.

	// Line the error occurred.
	//
	// Deprecated: use [Position].
	Line int

	err   error
	input string
}

// Position of an error.
type Position struct {
	Line  int // Line number, starting at 1.
	Start int // Start of error, as byte offset starting at 0.
	Len   int // Length in bytes.
}

func (pe ParseError) Error() string {
	msg := pe.Message
	if msg == "" { // Error from errorf()
		msg = pe.err.Error()
	}

	if pe.LastKey == "" {
		return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, msg)
	}
	return fmt.Sprintf("toml: line %d (last key %q): %s",
		pe.Position.Line, pe.LastKey, msg)
}

// ErrorWithPosition returns the error with detailed location context.
//
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithPosition() string {
	if pe.input == "" { // Should never happen, but just in case.
		return pe.Error()
	}

	var (
		lines = strings.Split(pe.input, "\n")
		col   = pe.column(lines)
		b     = new(strings.Builder)
	)

	msg := pe.Message
	if msg == "" {
		msg = pe.err.Error()
	}

	// TODO: don't show control characters as literals? This may not show up
	// well everywhere.

	if pe.Position.Len == 1 {
		fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d:\n\n",
			msg, pe.Position.Line, col+1)
	} else {
		fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d-%d:\n\n",
			msg, pe.Position.Line, col, col+pe.Position.Len)
	}
	if pe.Position.Line > 2 {
		fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-2, expandTab(lines[pe.Position.Line-3]))
	}
	if pe.Position.Line > 1 {
		fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-1, expandTab(lines[pe.Position.Line-2]))
	}

	/// Expand tabs, so that the ^^^s are at the correct position, but leave
	/// "column 10-13" intact. Adjusting this to the visual column would be
	/// better, but we don't know the tabsize of the user in their editor, which
	/// can be 8, 4, 2, or something else. We can't know. So leaving it as the
	/// character index is probably the "most correct".
	expanded := expandTab(lines[pe.Position.Line-1])
	diff := len(expanded) - len(lines[pe.Position.Line-1])

	fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line, expanded)
	fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", col+diff), strings.Repeat("^", pe.Position.Len))
	return b.String()
}

// ErrorWithUsage returns the error with detailed location context and usage
// guidance.
//
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithUsage() string {
	m := pe.ErrorWithPosition()
	if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {
		lines := strings.Split(strings.TrimSpace(u.Usage()), "\n")
		for i := range lines {
			if lines[i] != "" {
				lines[i] = "   " + lines[i]
			}
		}
		return m + "Error help:\n\n" + strings.Join(lines, "\n") + "\n"
	}
	return m
}

func (pe ParseError) column(lines []string) int {
	var pos, col int
	for i := range lines {
		ll := len(lines[i]) + 1 // +1 for the removed newline
		if pos+ll >= pe.Position.Start {
			col = pe.Position.Start - pos
			if col < 0 { // Should never happen, but just in case.
				col = 0
			}
			break
		}
		pos += ll
	}

	return col
}

func expandTab(s string) string {
	var (
		b    strings.Builder
		l    int
		fill = func(n int) string {
			b := make([]byte, n)
			for i := range b {
				b[i] = ' '
			}
			return string(b)
		}
	)
	b.Grow(len(s))
	for _, r := range s {
		switch r {
		case '\t':
			tw := 8 - l%8
			b.WriteString(fill(tw))
			l += tw
		default:
			b.WriteRune(r)
			l += 1
		}
	}
	return b.String()
}

type (
	errLexControl       struct{ r rune }
	errLexEscape        struct{ r rune }
	errLexUTF8          struct{ b byte }
	errParseDate        struct{ v string }
	errLexInlineTableNL struct{}
	errLexStringNL      struct{}
	errParseRange       struct {
		i    any    // int or float
		size string // "int64", "uint16", etc.
	}
	errUnsafeFloat struct {
		i    interface{} // float32 or float64
		size string      // "float32" or "float64"
	}
	errParseDuration struct{ d string }
)

func (e errLexControl) Error() string {
	return fmt.Sprintf("TOML files cannot contain control characters: '0x%02x'", e.r)
}
func (e errLexControl) Usage() string { return "" }

func (e errLexEscape) Error() string        { return fmt.Sprintf(`invalid escape in string '\%c'`, e.r) }
func (e errLexEscape) Usage() string        { return usageEscape }
func (e errLexUTF8) Error() string          { return fmt.Sprintf("invalid UTF-8 byte: 0x%02x", e.b) }
func (e errLexUTF8) Usage() string          { return "" }
func (e errParseDate) Error() string        { return fmt.Sprintf("invalid datetime: %q", e.v) }
func (e errParseDate) Usage() string        { return usageDate }
func (e errLexInlineTableNL) Error() string { return "newlines not allowed within inline tables" }
func (e errLexInlineTableNL) Usage() string { return usageInlineNewline }
func (e errLexStringNL) Error() string      { return "strings cannot contain newlines" }
func (e errLexStringNL) Usage() string      { return usageStringNewline }
func (e errParseRange) Error() string       { return fmt.Sprintf("%v is out of range for %s", e.i, e.size) }
func (e errParseRange) Usage() string       { return usageIntOverflow }
func (e errUnsafeFloat) Error() string {
	return fmt.Sprintf("%v is out of the safe %s range", e.i, e.size)
}
func (e errUnsafeFloat) Usage() string   { return usageUnsafeFloat }
func (e errParseDuration) Error() string { return fmt.Sprintf("invalid duration: %q", e.d) }
func (e errParseDuration) Usage() string { return usageDuration }

const usageEscape = `
A '\' inside a "-delimited string is interpreted as an escape character.

The following escape sequences are supported:
    \b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX

To prevent a '\' from being recognized as an escape character, use either:

- a ' or '''-delimited string; escape characters aren't processed in them; or
- write two backslashes to get a single backslash: '\\'.

If you're trying to add a Windows path (e.g. "C:\Users\martin") then using '/'
instead of '\' will usually also work: "C:/Users/martin".
`

const usageInlineNewline = `
Inline tables must always be on a single line:

    table = {key = 42, second = 43}

It is invalid to split them over multiple lines like so:

    # INVALID
    table = {
        key = 42,
        second = 43
    }

Use regular tables for this:

    [table]
    key = 42
    second = 43
`

const usageStringNewline = `
Strings must always be on a single line, and cannot span more than one line:

    # INVALID
    string = "Hello,
    world!"

Instead use """ or ''' to split strings over multiple lines:

    string = """Hello,
    world!"""
`

const usageIntOverflow = `
This number is too large; this may be an error in the TOML, but it can also be a
bug in the program that uses too small of an integer.

The maximum and minimum values are:

    size   │ lowest         │ highest
    ───────┼────────────────┼──────────────
    int8   │ -128           │ 127
    int16  │ -32,768        │ 32,767
    int32  │ -2,147,483,648 │ 2,147,483,647
    int64  │ -9.2 × 10¹⁷    │ 9.2 × 10¹⁷
    uint8  │ 0              │ 255
    uint16 │ 0              │ 65,535
    uint32 │ 0              │ 4,294,967,295
    uint64 │ 0              │ 1.8 × 10¹⁸

int refers to int32 on 32-bit systems and int64 on 64-bit systems.
`

const usageUnsafeFloat = `
This number is outside of the "safe" range for floating point numbers; whole
(non-fractional) numbers outside the below range cannot always be represented
accurately in a float, leading to some loss of accuracy.

Explicitly mark a number as a fractional unit by adding ".0", which will incur
some loss of accuracy; for example:

    f = 2_000_000_000.0

Accuracy ranges:

    float32 =            16,777,215
    float64 = 9,007,199,254,740,991
`

const usageDuration = `
A duration must be specified as "number<unit>", without any spaces. Valid units
are:

    ns         nanoseconds (billionth of a second)
    us, µs     microseconds (millionth of a second)
    ms         milliseconds (thousandth of a second)
    s          seconds
    m          minutes
    h          hours

You can combine multiple units; for example "5m10s" for 5 minutes and 10
seconds.
`

const usageDate = `
A TOML datetime must be in one of the following formats:

    2006-01-02T15:04:05Z07:00   Date and time, with timezone.
    2006-01-02T15:04:05         Date and time, but without timezone.
    2006-01-02                  Date without a time or timezone.
    15:04:05                    Just a time, without any timezone.

Seconds may optionally have a fraction, up to nanosecond precision:

    15:04:05.123
    15:04:05.856018510
`

// TOML 1.1:
// The seconds part in times is optional, and may be omitted:
//     2006-01-02T15:04Z07:00
//     2006-01-02T15:04
//     15:04

36	vendor/github.com/BurntSushi/toml/internal/tz.go	generated vendored	Normal file
@@ -0,0 +1,36 @@
|
||||
package internal

import "time"

// Timezones used for local datetime, date, and time TOML types.
//
// The exact way times and dates without a timezone should be interpreted is not
// well-defined in the TOML specification and left to the implementation. These
// default to the current local timezone offset of the computer, but this can be
// changed by changing these variables before decoding.
//
// TODO:
// Ideally we'd like to offer people the ability to configure the used timezone
// by setting Decoder.Timezone and Encoder.Timezone; however, this is a bit
// tricky: the reason we use three different variables for this is to support
// round-tripping – without these specific TZ names we wouldn't know which
// format to use.
//
// There isn't a good way to encode this right now though, and passing this sort
// of information also ties in to various related issues such as string format
// encoding, encoding of comments, etc.
//
// So, for the time being, just put this in internal until we can write a good
// comprehensive API for doing all of this.
//
// The reason they're exported is because they're referred to from e.g.
// internal/tag.
//
// Note that this behaviour is valid according to the TOML spec as the exact
// behaviour is left up to implementations.
var (
	localOffset   = func() int { _, o := time.Now().Zone(); return o }()
	LocalDatetime = time.FixedZone("datetime-local", localOffset)
	LocalDate     = time.FixedZone("date-local", localOffset)
	LocalTime     = time.FixedZone("time-local", localOffset)
)
1281 vendor/github.com/BurntSushi/toml/lex.go (generated, vendored, new file)
File diff suppressed because it is too large
148 vendor/github.com/BurntSushi/toml/meta.go (generated, vendored, new file)
@@ -0,0 +1,148 @@
package toml

import (
	"strings"
)

// MetaData allows access to meta information about TOML data that's not
// accessible otherwise.
//
// It allows checking if a key is defined in the TOML data, whether any keys
// were undecoded, and the TOML type of a key.
type MetaData struct {
	context Key // Used only during decoding.

	keyInfo map[string]keyInfo
	mapping map[string]any
	keys    []Key
	decoded map[string]struct{}
	data    []byte // Input file; for errors.
}

// IsDefined reports if the key exists in the TOML data.
//
// The key should be specified hierarchically, for example to access the TOML
// key "a.b.c" you would use IsDefined("a", "b", "c"). Keys are case sensitive.
//
// Returns false for an empty key.
func (md *MetaData) IsDefined(key ...string) bool {
	if len(key) == 0 {
		return false
	}

	var (
		hash      map[string]any
		ok        bool
		hashOrVal any = md.mapping
	)
	for _, k := range key {
		if hash, ok = hashOrVal.(map[string]any); !ok {
			return false
		}
		if hashOrVal, ok = hash[k]; !ok {
			return false
		}
	}
	return true
}

// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that does
// not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
	if ki, ok := md.keyInfo[Key(key).String()]; ok {
		return ki.tomlType.typeString()
	}
	return ""
}

// Keys returns a slice of every key in the TOML data, including key groups.
//
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific. The list will have the same
// order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
	return md.keys
}

// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a [Primitive] value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
	undecoded := make([]Key, 0, len(md.keys))
	for _, key := range md.keys {
		if _, ok := md.decoded[key.String()]; !ok {
			undecoded = append(undecoded, key)
		}
	}
	return undecoded
}

// Key represents any TOML key, including key groups. Use [MetaData.Keys] to get
// values of this type.
type Key []string

func (k Key) String() string {
	// This is called quite often, so it's a bit funky to make it faster.
	var b strings.Builder
	b.Grow(len(k) * 25)
outer:
	for i, kk := range k {
		if i > 0 {
			b.WriteByte('.')
		}
		if kk == "" {
			b.WriteString(`""`)
		} else {
			for _, r := range kk {
				// "Inline" isBareKeyChar
				if !((r >= 'A' && r <= 'Z') || (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9') || r == '_' || r == '-') {
					b.WriteByte('"')
					b.WriteString(dblQuotedReplacer.Replace(kk))
					b.WriteByte('"')
					continue outer
				}
			}
			b.WriteString(kk)
		}
	}
	return b.String()
}

func (k Key) maybeQuoted(i int) string {
	if k[i] == "" {
		return `""`
	}
	for _, r := range k[i] {
		if (r >= 'A' && r <= 'Z') || (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9') || r == '_' || r == '-' {
			continue
		}
		return `"` + dblQuotedReplacer.Replace(k[i]) + `"`
	}
	return k[i]
}

// Like append(), but only increase the cap by 1.
func (k Key) add(piece string) Key {
	if cap(k) > len(k) {
		return append(k, piece)
	}
	newKey := make(Key, len(k)+1)
	copy(newKey, k)
	newKey[len(k)] = piece
	return newKey
}

func (k Key) parent() Key  { return k[:len(k)-1] } // all except the last piece.
func (k Key) last() string { return k[len(k)-1] }  // last piece of this key.
844 vendor/github.com/BurntSushi/toml/parse.go (generated, vendored, new file)
@@ -0,0 +1,844 @@
package toml

import (
	"fmt"
	"math"
	"os"
	"strconv"
	"strings"
	"time"
	"unicode/utf8"

	"github.com/BurntSushi/toml/internal"
)

type parser struct {
	lx         *lexer
	context    Key      // Full key for the current hash in scope.
	currentKey string   // Base key name for everything except hashes.
	pos        Position // Current position in the TOML file.
	tomlNext   bool

	ordered []Key // List of keys in the order that they appear in the TOML data.

	keyInfo   map[string]keyInfo  // Map keyname → info about the TOML key.
	mapping   map[string]any      // Map keyname → key value.
	implicits map[string]struct{} // Record implicit keys (e.g. "key.group.names").
}

type keyInfo struct {
	pos      Position
	tomlType tomlType
}

func parse(data string) (p *parser, err error) {
	_, tomlNext := os.LookupEnv("BURNTSUSHI_TOML_110")

	defer func() {
		if r := recover(); r != nil {
			if pErr, ok := r.(ParseError); ok {
				pErr.input = data
				err = pErr
				return
			}
			panic(r)
		}
	}()

	// Read over BOM; do this here as the lexer calls utf8.DecodeRuneInString()
	// which mangles stuff. UTF-16 BOM isn't strictly valid, but some tools add
	// it anyway.
	if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") { // UTF-16
		data = data[2:]
		//lint:ignore S1017 https://github.com/dominikh/go-tools/issues/1447
	} else if strings.HasPrefix(data, "\xef\xbb\xbf") { // UTF-8
		data = data[3:]
	}

	// Examine first few bytes for NULL bytes; this probably means it's a UTF-16
	// file (second byte in surrogate pair being NULL). Again, do this here to
	// avoid having to deal with UTF-8/16 stuff in the lexer.
	ex := 6
	if len(data) < 6 {
		ex = len(data)
	}
	if i := strings.IndexRune(data[:ex], 0); i > -1 {
		return nil, ParseError{
			Message:  "files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8",
			Position: Position{Line: 1, Start: i, Len: 1},
			Line:     1,
			input:    data,
		}
	}

	p = &parser{
		keyInfo:   make(map[string]keyInfo),
		mapping:   make(map[string]any),
		lx:        lex(data, tomlNext),
		ordered:   make([]Key, 0),
		implicits: make(map[string]struct{}),
		tomlNext:  tomlNext,
	}
	for {
		item := p.next()
		if item.typ == itemEOF {
			break
		}
		p.topLevel(item)
	}

	return p, nil
}

func (p *parser) panicErr(it item, err error) {
	panic(ParseError{
		err:      err,
		Position: it.pos,
		Line:     it.pos.Len,
		LastKey:  p.current(),
	})
}

func (p *parser) panicItemf(it item, format string, v ...any) {
	panic(ParseError{
		Message:  fmt.Sprintf(format, v...),
		Position: it.pos,
		Line:     it.pos.Len,
		LastKey:  p.current(),
	})
}

func (p *parser) panicf(format string, v ...any) {
	panic(ParseError{
		Message:  fmt.Sprintf(format, v...),
		Position: p.pos,
		Line:     p.pos.Line,
		LastKey:  p.current(),
	})
}

func (p *parser) next() item {
	it := p.lx.nextItem()
	//fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.pos.Line, it.val)
	if it.typ == itemError {
		if it.err != nil {
			panic(ParseError{
				Position: it.pos,
				Line:     it.pos.Line,
				LastKey:  p.current(),
				err:      it.err,
			})
		}

		p.panicItemf(it, "%s", it.val)
	}
	return it
}

func (p *parser) nextPos() item {
	it := p.next()
	p.pos = it.pos
	return it
}

func (p *parser) bug(format string, v ...any) {
	panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}

func (p *parser) expect(typ itemType) item {
	it := p.next()
	p.assertEqual(typ, it.typ)
	return it
}

func (p *parser) assertEqual(expected, got itemType) {
	if expected != got {
		p.bug("Expected '%s' but got '%s'.", expected, got)
	}
}

func (p *parser) topLevel(item item) {
	switch item.typ {
	case itemCommentStart: // # ..
		p.expect(itemText)
	case itemTableStart: // [ .. ]
		name := p.nextPos()

		var key Key
		for ; name.typ != itemTableEnd && name.typ != itemEOF; name = p.next() {
			key = append(key, p.keyString(name))
		}
		p.assertEqual(itemTableEnd, name.typ)

		p.addContext(key, false)
		p.setType("", tomlHash, item.pos)
		p.ordered = append(p.ordered, key)
	case itemArrayTableStart: // [[ .. ]]
		name := p.nextPos()

		var key Key
		for ; name.typ != itemArrayTableEnd && name.typ != itemEOF; name = p.next() {
			key = append(key, p.keyString(name))
		}
		p.assertEqual(itemArrayTableEnd, name.typ)

		p.addContext(key, true)
		p.setType("", tomlArrayHash, item.pos)
		p.ordered = append(p.ordered, key)
	case itemKeyStart: // key = ..
		outerContext := p.context
		/// Read all the key parts (e.g. 'a' and 'b' in 'a.b')
		k := p.nextPos()
		var key Key
		for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
			key = append(key, p.keyString(k))
		}
		p.assertEqual(itemKeyEnd, k.typ)

		/// The current key is the last part.
		p.currentKey = key.last()

		/// All the other parts (if any) are the context; need to set each part
		/// as implicit.
		context := key.parent()
		for i := range context {
			p.addImplicitContext(append(p.context, context[i:i+1]...))
		}
		p.ordered = append(p.ordered, p.context.add(p.currentKey))

		/// Set value.
		vItem := p.next()
		val, typ := p.value(vItem, false)
		p.setValue(p.currentKey, val)
		p.setType(p.currentKey, typ, vItem.pos)

		/// Remove the context we added (preserving any context from [tbl] lines).
		p.context = outerContext
		p.currentKey = ""
	default:
		p.bug("Unexpected type at top level: %s", item.typ)
	}
}

// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
	switch it.typ {
	case itemText:
		return it.val
	case itemString, itemStringEsc, itemMultilineString,
		itemRawString, itemRawMultilineString:
		s, _ := p.value(it, false)
		return s.(string)
	default:
		p.bug("Unexpected key type: %s", it.typ)
	}
	panic("unreachable")
}

var datetimeRepl = strings.NewReplacer(
	"z", "Z",
	"t", "T",
	" ", "T")

// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item, parentIsArray bool) (any, tomlType) {
	switch it.typ {
	case itemString:
		return it.val, p.typeOfPrimitive(it)
	case itemStringEsc:
		return p.replaceEscapes(it, it.val), p.typeOfPrimitive(it)
	case itemMultilineString:
		return p.replaceEscapes(it, p.stripEscapedNewlines(stripFirstNewline(it.val))), p.typeOfPrimitive(it)
	case itemRawString:
		return it.val, p.typeOfPrimitive(it)
	case itemRawMultilineString:
		return stripFirstNewline(it.val), p.typeOfPrimitive(it)
	case itemInteger:
		return p.valueInteger(it)
	case itemFloat:
		return p.valueFloat(it)
	case itemBool:
		switch it.val {
		case "true":
			return true, p.typeOfPrimitive(it)
		case "false":
			return false, p.typeOfPrimitive(it)
		default:
			p.bug("Expected boolean value, but got '%s'.", it.val)
		}
	case itemDatetime:
		return p.valueDatetime(it)
	case itemArray:
		return p.valueArray(it)
	case itemInlineTableStart:
		return p.valueInlineTable(it, parentIsArray)
	default:
		p.bug("Unexpected value type: %s", it.typ)
	}
	panic("unreachable")
}

func (p *parser) valueInteger(it item) (any, tomlType) {
	if !numUnderscoresOK(it.val) {
		p.panicItemf(it, "Invalid integer %q: underscores must be surrounded by digits", it.val)
	}
	if numHasLeadingZero(it.val) {
		p.panicItemf(it, "Invalid integer %q: cannot have leading zeroes", it.val)
	}

	num, err := strconv.ParseInt(it.val, 0, 64)
	if err != nil {
		// Distinguish integer values. Normally, it'd be a bug if the lexer
		// provides an invalid integer, but it's possible that the number is
		// out of range of valid values (which the lexer cannot determine).
		// So mark the former as a bug but the latter as a legitimate user
		// error.
		if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
			p.panicErr(it, errParseRange{i: it.val, size: "int64"})
		} else {
			p.bug("Expected integer value, but got '%s'.", it.val)
		}
	}
	return num, p.typeOfPrimitive(it)
}

func (p *parser) valueFloat(it item) (any, tomlType) {
	parts := strings.FieldsFunc(it.val, func(r rune) bool {
		switch r {
		case '.', 'e', 'E':
			return true
		}
		return false
	})
	for _, part := range parts {
		if !numUnderscoresOK(part) {
			p.panicItemf(it, "Invalid float %q: underscores must be surrounded by digits", it.val)
		}
	}
	if len(parts) > 0 && numHasLeadingZero(parts[0]) {
		p.panicItemf(it, "Invalid float %q: cannot have leading zeroes", it.val)
	}
	if !numPeriodsOK(it.val) {
		// As a special case, numbers like '123.' or '1.e2',
		// which are valid as far as Go/strconv are concerned,
		// must be rejected because TOML says that a fractional
		// part consists of '.' followed by 1+ digits.
		p.panicItemf(it, "Invalid float %q: '.' must be followed by one or more digits", it.val)
	}
	val := strings.Replace(it.val, "_", "", -1)
	signbit := false
	if val == "+nan" || val == "-nan" {
		signbit = val == "-nan"
		val = "nan"
	}
	num, err := strconv.ParseFloat(val, 64)
	if err != nil {
		if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
			p.panicErr(it, errParseRange{i: it.val, size: "float64"})
		} else {
			p.panicItemf(it, "Invalid float value: %q", it.val)
		}
	}
	if signbit {
		num = math.Copysign(num, -1)
	}
	return num, p.typeOfPrimitive(it)
}

var dtTypes = []struct {
	fmt  string
	zone *time.Location
	next bool
}{
	{time.RFC3339Nano, time.Local, false},
	{"2006-01-02T15:04:05.999999999", internal.LocalDatetime, false},
	{"2006-01-02", internal.LocalDate, false},
	{"15:04:05.999999999", internal.LocalTime, false},

	// tomlNext
	{"2006-01-02T15:04Z07:00", time.Local, true},
	{"2006-01-02T15:04", internal.LocalDatetime, true},
	{"15:04", internal.LocalTime, true},
}

func (p *parser) valueDatetime(it item) (any, tomlType) {
	it.val = datetimeRepl.Replace(it.val)
	var (
		t   time.Time
		ok  bool
		err error
	)
	for _, dt := range dtTypes {
		if dt.next && !p.tomlNext {
			continue
		}
		t, err = time.ParseInLocation(dt.fmt, it.val, dt.zone)
		if err == nil {
			if missingLeadingZero(it.val, dt.fmt) {
				p.panicErr(it, errParseDate{it.val})
			}
			ok = true
			break
		}
	}
	if !ok {
		p.panicErr(it, errParseDate{it.val})
	}
	return t, p.typeOfPrimitive(it)
}

// Go's time.Parse() will accept numbers without a leading zero; there isn't any
// way to require it. https://github.com/golang/go/issues/29911
//
// Depend on the fact that the separators (- and :) should always be at the same
// location.
func missingLeadingZero(d, l string) bool {
	for i, c := range []byte(l) {
		if c == '.' || c == 'Z' {
			return false
		}
		if (c < '0' || c > '9') && d[i] != c {
			return true
		}
	}
	return false
}

func (p *parser) valueArray(it item) (any, tomlType) {
	p.setType(p.currentKey, tomlArray, it.pos)

	var (
		// Initialize to a non-nil slice to make it consistent with how S = []
		// decodes into a non-nil slice inside something like struct { S
		// []string }. See #338
		array = make([]any, 0, 2)
	)
	for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
		if it.typ == itemCommentStart {
			p.expect(itemText)
			continue
		}

		val, typ := p.value(it, true)
		array = append(array, val)

		// XXX: type isn't used here, we need it to record the accurate type
		// information.
		//
		// Not entirely sure how to best store this; could use "key[0]",
		// "key[1]" notation, or maybe store it on the Array type?
		_ = typ
	}
	return array, tomlArray
}

func (p *parser) valueInlineTable(it item, parentIsArray bool) (any, tomlType) {
	var (
		topHash      = make(map[string]any)
		outerContext = p.context
		outerKey     = p.currentKey
	)

	p.context = append(p.context, p.currentKey)
	prevContext := p.context
	p.currentKey = ""

	p.addImplicit(p.context)
	p.addContext(p.context, parentIsArray)

	/// Loop over all table key/value pairs.
	for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
		if it.typ == itemCommentStart {
			p.expect(itemText)
			continue
		}

		/// Read all key parts.
		k := p.nextPos()
		var key Key
		for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
			key = append(key, p.keyString(k))
		}
		p.assertEqual(itemKeyEnd, k.typ)

		/// The current key is the last part.
		p.currentKey = key.last()

		/// All the other parts (if any) are the context; need to set each part
		/// as implicit.
		context := key.parent()
		for i := range context {
			p.addImplicitContext(append(p.context, context[i:i+1]...))
		}
		p.ordered = append(p.ordered, p.context.add(p.currentKey))

		/// Set the value.
		val, typ := p.value(p.next(), false)
		p.setValue(p.currentKey, val)
		p.setType(p.currentKey, typ, it.pos)

		hash := topHash
		for _, c := range context {
			h, ok := hash[c]
			if !ok {
				h = make(map[string]any)
				hash[c] = h
			}
			hash, ok = h.(map[string]any)
			if !ok {
				p.panicf("%q is not a table", p.context)
			}
		}
		hash[p.currentKey] = val

		/// Restore context.
		p.context = prevContext
	}
	p.context = outerContext
	p.currentKey = outerKey
	return topHash, tomlHash
}

// numHasLeadingZero checks if this number has leading zeroes, allowing for '0',
// +/- signs, and base prefixes.
func numHasLeadingZero(s string) bool {
	if len(s) > 1 && s[0] == '0' && !(s[1] == 'b' || s[1] == 'o' || s[1] == 'x') { // Allow 0b, 0o, 0x
		return true
	}
	if len(s) > 2 && (s[0] == '-' || s[0] == '+') && s[1] == '0' {
		return true
	}
	return false
}

// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
	switch s {
	case "nan", "+nan", "-nan", "inf", "-inf", "+inf":
		return true
	}
	accept := false
	for _, r := range s {
		if r == '_' {
			if !accept {
				return false
			}
		}

		// isHex is a superset of all the permissible characters surrounding
		// an underscore.
		accept = isHex(r)
	}
	return accept
}

// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
	period := false
	for _, r := range s {
		if period && !isDigit(r) {
			return false
		}
		period = r == '.'
	}
	return !period
}
|
||||
// Set the current context of the parser, where the context is either a hash or
|
||||
// an array of hashes, depending on the value of the `array` parameter.
|
||||
//
|
||||
// Establishing the context also makes sure that the key isn't a duplicate, and
|
||||
// will create implicit hashes automatically.
|
||||
func (p *parser) addContext(key Key, array bool) {
|
||||
/// Always start at the top level and drill down for our context.
|
||||
hashContext := p.mapping
|
||||
keyContext := make(Key, 0, len(key)-1)
|
||||
|
||||
/// We only need implicit hashes for the parents.
|
||||
for _, k := range key.parent() {
|
||||
_, ok := hashContext[k]
|
||||
keyContext = append(keyContext, k)
|
||||
|
||||
// No key? Make an implicit hash and move on.
|
||||
if !ok {
|
||||
p.addImplicit(keyContext)
|
||||
hashContext[k] = make(map[string]any)
|
||||
}
|
||||
|
||||
// If the hash context is actually an array of tables, then set
|
||||
// the hash context to the last element in that array.
|
||||
//
|
||||
// Otherwise, it better be a table, since this MUST be a key group (by
|
||||
// virtue of it not being the last element in a key).
|
||||
switch t := hashContext[k].(type) {
|
||||
case []map[string]any:
|
||||
hashContext = t[len(t)-1]
|
||||
case map[string]any:
|
||||
hashContext = t
|
||||
default:
|
||||
p.panicf("Key '%s' was already created as a hash.", keyContext)
|
||||
}
|
||||
}
|
||||
|
||||
p.context = keyContext
|
||||
if array {
|
||||
// If this is the first element for this array, then allocate a new
|
||||
// list of tables for it.
|
||||
k := key.last()
|
||||
if _, ok := hashContext[k]; !ok {
|
||||
hashContext[k] = make([]map[string]any, 0, 4)
|
||||
}
|
||||
|
||||
// Add a new table. But make sure the key hasn't already been used
|
||||
// for something else.
|
||||
if hash, ok := hashContext[k].([]map[string]any); ok {
|
||||
hashContext[k] = append(hash, make(map[string]any))
|
||||
} else {
|
||||
p.panicf("Key '%s' was already created and cannot be used as an array.", key)
|
||||
}
|
||||
} else {
|
||||
p.setValue(key.last(), make(map[string]any))
|
||||
}
|
||||
p.context = append(p.context, key.last())
|
||||
}
|
||||
|
||||
// setValue sets the given key to the given value in the current context.
|
||||
// It will make sure that the key hasn't already been defined, account for
|
||||
// implicit key groups.
|
||||
func (p *parser) setValue(key string, value any) {
|
||||
var (
|
||||
tmpHash any
|
||||
ok bool
|
||||
hash = p.mapping
|
||||
keyContext = make(Key, 0, len(p.context)+1)
|
||||
)
|
||||
for _, k := range p.context {
|
||||
keyContext = append(keyContext, k)
|
||||
if tmpHash, ok = hash[k]; !ok {
|
||||
p.bug("Context for key '%s' has not been established.", keyContext)
|
||||
}
|
||||
switch t := tmpHash.(type) {
|
||||
case []map[string]any:
|
||||
// The context is a table of hashes. Pick the most recent table
|
||||
// defined as the current hash.
|
||||
hash = t[len(t)-1]
|
||||
case map[string]any:
|
||||
hash = t
|
||||
default:
|
||||
p.panicf("Key '%s' has already been defined.", keyContext)
|
||||
}
|
||||
}
|
||||
keyContext = append(keyContext, key)
|
||||
|
||||
if _, ok := hash[key]; ok {
|
||||
// Normally redefining keys isn't allowed, but the key could have been
|
||||
// defined implicitly and it's allowed to be redefined concretely. (See
|
||||
// the `valid/implicit-and-explicit-after.toml` in toml-test)
|
||||
//
|
||||
// But we have to make sure to stop marking it as an implicit. (So that
|
||||
// another redefinition provokes an error.)
|
||||
//
|
||||
// Note that since it has already been defined (as a hash), we don't
|
||||
// want to overwrite it. So our business is done.
|
||||
if p.isArray(keyContext) {
|
||||
p.removeImplicit(keyContext)
|
||||
hash[key] = value
|
||||
return
|
||||
}
|
||||
if p.isImplicit(keyContext) {
|
||||
p.removeImplicit(keyContext)
|
||||
return
|
||||
}
|
||||
// Otherwise, we have a concrete key trying to override a previous key,
|
||||
// which is *always* wrong.
|
||||
p.panicf("Key '%s' has already been defined.", keyContext)
|
||||
}
|
||||
|
||||
hash[key] = value
|
||||
}
|
||||
|
||||
// setType sets the type of a particular value at a given key. It should be
|
||||
// called immediately AFTER setValue.
|
||||
//
|
||||
// Note that if `key` is empty, then the type given will be applied to the
|
||||
// current context (which is either a table or an array of tables).
|
||||
func (p *parser) setType(key string, typ tomlType, pos Position) {
|
||||
keyContext := make(Key, 0, len(p.context)+1)
|
||||
keyContext = append(keyContext, p.context...)
|
||||
if len(key) > 0 { // allow type setting for hashes
|
||||
keyContext = append(keyContext, key)
|
||||
}
|
||||
// Special case to make empty keys ("" = 1) work.
|
||||
// Without it it will set "" rather than `""`.
|
||||
// TODO: why is this needed? And why is this only needed here?
|
||||
if len(keyContext) == 0 {
|
||||
keyContext = Key{""}
|
||||
}
|
||||
p.keyInfo[keyContext.String()] = keyInfo{tomlType: typ, pos: pos}
|
||||
}
|
||||
|
||||
// Implicit keys need to be created when tables are implied in "a.b.c.d = 1" and
|
||||
// "[a.b.c]" (the "a", "b", and "c" hashes are never created explicitly).
|
||||
func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = struct{}{} }
|
||||
func (p *parser) removeImplicit(key Key) { delete(p.implicits, key.String()) }
|
||||
func (p *parser) isImplicit(key Key) bool { _, ok := p.implicits[key.String()]; return ok }
|
||||
func (p *parser) isArray(key Key) bool { return p.keyInfo[key.String()].tomlType == tomlArray }
|
||||
func (p *parser) addImplicitContext(key Key) { p.addImplicit(key); p.addContext(key, false) }
|
||||
|
||||
// current returns the full key name of the current context.
|
||||
func (p *parser) current() string {
|
||||
if len(p.currentKey) == 0 {
|
||||
return p.context.String()
|
||||
}
|
||||
if len(p.context) == 0 {
|
||||
return p.currentKey
|
||||
}
|
||||
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
|
||||
}
|
||||
|
||||
func stripFirstNewline(s string) string {
|
||||
if len(s) > 0 && s[0] == '\n' {
|
||||
return s[1:]
|
||||
}
|
||||
if len(s) > 1 && s[0] == '\r' && s[1] == '\n' {
|
||||
return s[2:]
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// stripEscapedNewlines removes whitespace after line-ending backslashes in
|
||||
// multiline strings.
|
||||
//
|
||||
// A line-ending backslash is an unescaped \ followed only by whitespace until
|
||||
// the next newline. After a line-ending backslash, all whitespace is removed
|
||||
// until the next non-whitespace character.
|
||||
func (p *parser) stripEscapedNewlines(s string) string {
|
||||
var (
|
||||
b strings.Builder
|
||||
i int
|
||||
)
|
||||
b.Grow(len(s))
|
||||
for {
|
||||
ix := strings.Index(s[i:], `\`)
|
||||
if ix < 0 {
|
||||
b.WriteString(s)
|
||||
return b.String()
|
||||
}
|
||||
i += ix
|
||||
|
||||
if len(s) > i+1 && s[i+1] == '\\' {
|
||||
// Escaped backslash.
|
||||
i += 2
|
||||
continue
|
||||
}
|
||||
// Scan until the next non-whitespace.
|
||||
j := i + 1
|
||||
whitespaceLoop:
|
||||
for ; j < len(s); j++ {
|
||||
switch s[j] {
|
||||
case ' ', '\t', '\r', '\n':
|
||||
default:
|
||||
break whitespaceLoop
|
||||
}
|
||||
}
|
||||
if j == i+1 {
|
||||
// Not a whitespace escape.
|
||||
i++
|
||||
continue
|
||||
}
|
||||
if !strings.Contains(s[i:j], "\n") {
|
||||
// This is not a line-ending backslash. (It's a bad escape sequence,
|
||||
// but we can let replaceEscapes catch it.)
|
||||
i++
|
||||
continue
|
||||
}
|
||||
b.WriteString(s[:i])
|
||||
s = s[j:]
|
||||
i = 0
|
||||
}
|
||||
}
|
||||
|
||||
func (p *parser) replaceEscapes(it item, str string) string {
	var (
		b    strings.Builder
		skip = 0
	)
	b.Grow(len(str))
	for i, c := range str {
		if skip > 0 {
			skip--
			continue
		}
		if c != '\\' {
			b.WriteRune(c)
			continue
		}

		if i >= len(str) {
			p.bug("Escape sequence at end of string.")
			return ""
		}
		switch str[i+1] {
		default:
			p.bug("Expected valid escape code after \\, but got %q.", str[i+1])
		case ' ', '\t':
			p.panicItemf(it, "invalid escape: '\\%c'", str[i+1])
		case 'b':
			b.WriteByte(0x08)
			skip = 1
		case 't':
			b.WriteByte(0x09)
			skip = 1
		case 'n':
			b.WriteByte(0x0a)
			skip = 1
		case 'f':
			b.WriteByte(0x0c)
			skip = 1
		case 'r':
			b.WriteByte(0x0d)
			skip = 1
		case 'e':
			if p.tomlNext {
				b.WriteByte(0x1b)
				skip = 1
			}
		case '"':
			b.WriteByte(0x22)
			skip = 1
		case '\\':
			b.WriteByte(0x5c)
			skip = 1
		// The lexer guarantees the correct number of characters are present;
		// don't need to check here.
		case 'x':
			if p.tomlNext {
				escaped := p.asciiEscapeToUnicode(it, str[i+2:i+4])
				b.WriteRune(escaped)
				skip = 3
			}
		case 'u':
			escaped := p.asciiEscapeToUnicode(it, str[i+2:i+6])
			b.WriteRune(escaped)
			skip = 5
		case 'U':
			escaped := p.asciiEscapeToUnicode(it, str[i+2:i+10])
			b.WriteRune(escaped)
			skip = 9
		}
	}
	return b.String()
}

func (p *parser) asciiEscapeToUnicode(it item, s string) rune {
	hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
	if err != nil {
		p.bug("Could not parse '%s' as a hexadecimal number, but the lexer claims it's OK: %s", s, err)
	}
	if !utf8.ValidRune(rune(hex)) {
		p.panicItemf(it, "Escaped character '\\u%s' is not valid UTF-8.", s)
	}
	return rune(hex)
}
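The hex-escape validation above can be exercised in isolation. This is a sketch, not the package's API: `decodeHexEscape` is a hypothetical stand-in that mirrors the parse-and-validate steps of `asciiEscapeToUnicode` using only the standard library.

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf8"
)

// decodeHexEscape mirrors the logic of asciiEscapeToUnicode: parse the hex
// digits of a \uXXXX escape and reject values that are not valid runes
// (for example surrogate halves such as 0xD800).
func decodeHexEscape(s string) (rune, error) {
	hex, err := strconv.ParseUint(s, 16, 32)
	if err != nil {
		return 0, err
	}
	if !utf8.ValidRune(rune(hex)) {
		return 0, fmt.Errorf("escaped character '\\u%s' is not a valid rune", s)
	}
	return rune(hex), nil
}

func main() {
	r, err := decodeHexEscape("263a")
	if err != nil {
		panic(err)
	}
	fmt.Printf("U+%04X %c\n", r, r)
}
```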
238
vendor/github.com/BurntSushi/toml/type_fields.go
generated
vendored
Normal file
@@ -0,0 +1,238 @@
package toml

// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.

import (
	"reflect"
	"sort"
	"sync"
)

// A field represents a single field found in a struct.
type field struct {
	name  string       // the name of the field (`toml` tag included)
	tag   bool         // whether field has a `toml` tag
	index []int        // represents the depth of an anonymous field
	typ   reflect.Type // the type of the field
}

// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field

func (x byName) Len() int      { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
	if x[i].name != x[j].name {
		return x[i].name < x[j].name
	}
	if len(x[i].index) != len(x[j].index) {
		return len(x[i].index) < len(x[j].index)
	}
	if x[i].tag != x[j].tag {
		return x[i].tag
	}
	return byIndex(x).Less(i, j)
}

// byIndex sorts field by index sequence.
type byIndex []field

func (x byIndex) Len() int      { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
	for k, xik := range x[i].index {
		if k >= len(x[j].index) {
			return false
		}
		if xik != x[j].index[k] {
			return xik < x[j].index[k]
		}
	}
	return len(x[i].index) < len(x[j].index)
}

// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
	// Anonymous fields to explore at the current level and the next.
	current := []field{}
	next := []field{{typ: t}}

	// Count of queued names for current level and the next.
	var count map[reflect.Type]int
	var nextCount map[reflect.Type]int

	// Types already visited at an earlier level.
	visited := map[reflect.Type]bool{}

	// Fields found.
	var fields []field

	for len(next) > 0 {
		current, next = next, current[:0]
		count, nextCount = nextCount, map[reflect.Type]int{}

		for _, f := range current {
			if visited[f.typ] {
				continue
			}
			visited[f.typ] = true

			// Scan f.typ for fields to include.
			for i := 0; i < f.typ.NumField(); i++ {
				sf := f.typ.Field(i)
				if sf.PkgPath != "" && !sf.Anonymous { // unexported
					continue
				}
				opts := getOptions(sf.Tag)
				if opts.skip {
					continue
				}
				index := make([]int, len(f.index)+1)
				copy(index, f.index)
				index[len(f.index)] = i

				ft := sf.Type
				if ft.Name() == "" && ft.Kind() == reflect.Ptr {
					// Follow pointer.
					ft = ft.Elem()
				}

				// Record found field and index sequence.
				if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
					tagged := opts.name != ""
					name := opts.name
					if name == "" {
						name = sf.Name
					}
					fields = append(fields, field{name, tagged, index, ft})
					if count[f.typ] > 1 {
						// If there were multiple instances, add a second,
						// so that the annihilation code will see a duplicate.
						// It only cares about the distinction between 1 or 2,
						// so don't bother generating any more copies.
						fields = append(fields, fields[len(fields)-1])
					}
					continue
				}

				// Record new anonymous struct to explore in next round.
				nextCount[ft]++
				if nextCount[ft] == 1 {
					f := field{name: ft.Name(), index: index, typ: ft}
					next = append(next, f)
				}
			}
		}
	}

	sort.Sort(byName(fields))

	// Delete all fields that are hidden by the Go rules for embedded fields,
	// except that fields with TOML tags are promoted.

	// The fields are sorted in primary order of name, secondary order
	// of field index length. Loop over names; for each name, delete
	// hidden fields by choosing the one dominant field that survives.
	out := fields[:0]
	for advance, i := 0, 0; i < len(fields); i += advance {
		// One iteration per name.
		// Find the sequence of fields with the name of this first field.
		fi := fields[i]
		name := fi.name
		for advance = 1; i+advance < len(fields); advance++ {
			fj := fields[i+advance]
			if fj.name != name {
				break
			}
		}
		if advance == 1 { // Only one field with this name
			out = append(out, fi)
			continue
		}
		dominant, ok := dominantField(fields[i : i+advance])
		if ok {
			out = append(out, dominant)
		}
	}

	fields = out
	sort.Sort(byIndex(fields))

	return fields
}

// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
	// The fields are sorted in increasing index-length order. The winner
	// must therefore be one with the shortest index length. Drop all
	// longer entries, which is easy: just truncate the slice.
	length := len(fields[0].index)
	tagged := -1 // Index of first tagged field.
	for i, f := range fields {
		if len(f.index) > length {
			fields = fields[:i]
			break
		}
		if f.tag {
			if tagged >= 0 {
				// Multiple tagged fields at the same level: conflict.
				// Return no field.
				return field{}, false
			}
			tagged = i
		}
	}
	if tagged >= 0 {
		return fields[tagged], true
	}
	// All remaining fields have the same length. If there's more than one,
	// we have a conflict (two fields named "X" at the same level) and we
	// return no field.
	if len(fields) > 1 {
		return field{}, false
	}
	return fields[0], true
}

var fieldCache struct {
	sync.RWMutex
	m map[reflect.Type][]field
}

// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
	fieldCache.RLock()
	f := fieldCache.m[t]
	fieldCache.RUnlock()
	if f != nil {
		return f
	}

	// Compute fields without lock.
	// Might duplicate effort but won't hold other computations back.
	f = typeFields(t)
	if f == nil {
		f = []field{}
	}

	fieldCache.Lock()
	if fieldCache.m == nil {
		fieldCache.m = map[reflect.Type][]field{}
	}
	fieldCache.m[t] = f
	fieldCache.Unlock()
	return f
}
65
vendor/github.com/BurntSushi/toml/type_toml.go
generated
vendored
Normal file
@@ -0,0 +1,65 @@
package toml

// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be militating
// toward adding real composite types.
type tomlType interface {
	typeString() string
}

// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
	if t1 == nil || t2 == nil {
		return false
	}
	return t1.typeString() == t2.typeString()
}

func typeIsTable(t tomlType) bool {
	return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}

type tomlBaseType string

func (btype tomlBaseType) typeString() string { return string(btype) }
func (btype tomlBaseType) String() string     { return btype.typeString() }

var (
	tomlInteger   tomlBaseType = "Integer"
	tomlFloat     tomlBaseType = "Float"
	tomlDatetime  tomlBaseType = "Datetime"
	tomlString    tomlBaseType = "String"
	tomlBool      tomlBaseType = "Bool"
	tomlArray     tomlBaseType = "Array"
	tomlHash      tomlBaseType = "Hash"
	tomlArrayHash tomlBaseType = "ArrayHash"
)

// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
	switch lexItem.typ {
	case itemInteger:
		return tomlInteger
	case itemFloat:
		return tomlFloat
	case itemDatetime:
		return tomlDatetime
	case itemString, itemStringEsc:
		return tomlString
	case itemMultilineString:
		return tomlString
	case itemRawString:
		return tomlString
	case itemRawMultilineString:
		return tomlString
	case itemBool:
		return tomlBool
	}
	p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
	panic("unreachable")
}
19
vendor/github.com/foize/go.fifo/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,19 @@
Copyright (C) 2012 Yasushi Saito, 2013 Foize B.V.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
113
vendor/github.com/foize/go.fifo/fifo.go
generated
vendored
Normal file
@@ -0,0 +1,113 @@
// Created by Yaz Saito on 06/15/12.
// Modified by Geert-Johan Riemer, Foize B.V.

// TODO:
// - travis CI
// - maybe add method (*Queue).Peek()

package fifo

import (
	"sync"
)

const chunkSize = 64

// chunks are used to make a queue auto resizeable.
type chunk struct {
	items       [chunkSize]interface{} // list of queued items
	first, last int                    // positions of the first and last item in this chunk
	next        *chunk                 // pointer to the next chunk (if any)
}

// Queue is a fifo queue.
type Queue struct {
	head, tail *chunk     // chunk head and tail
	count      int        // total amount of items in the queue
	lock       sync.Mutex // synchronisation lock
}

// NewQueue creates a new and empty *fifo.Queue.
func NewQueue() (q *Queue) {
	initChunk := new(chunk)
	q = &Queue{
		head: initChunk,
		tail: initChunk,
	}
	return q
}

// Len returns the number of items in the queue.
func (q *Queue) Len() (length int) {
	// locking to make Queue thread-safe
	q.lock.Lock()
	defer q.lock.Unlock()

	// copy q.count and return length
	length = q.count
	return length
}

// Add adds an item to the end of the queue.
func (q *Queue) Add(item interface{}) {
	// locking to make Queue thread-safe
	q.lock.Lock()
	defer q.lock.Unlock()

	// check if item is valid
	if item == nil {
		panic("can not add nil item to fifo queue")
	}

	// if the tail chunk is full, create a new one and add it to the queue.
	if q.tail.last >= chunkSize {
		q.tail.next = new(chunk)
		q.tail = q.tail.next
	}

	// add item to the tail chunk at the last position
	q.tail.items[q.tail.last] = item
	q.tail.last++
	q.count++
}

// Next removes the item at the head of the queue and returns it.
// Returns nil when there are no items left in the queue.
func (q *Queue) Next() (item interface{}) {
	// locking to make Queue thread-safe
	q.lock.Lock()
	defer q.lock.Unlock()

	// Return nil if there are no items to return
	if q.count == 0 {
		return nil
	}
	// FIXME: why would this check be required?
	if q.head.first >= q.head.last {
		return nil
	}

	// Get item from queue
	item = q.head.items[q.head.first]

	// increment first position and decrement queue item count
	q.head.first++
	q.count--

	if q.head.first >= q.head.last {
		// we're at the end of this chunk and we should do some maintenance;
		// if there are no follow-up chunks then reset the current one so it can be used again.
		if q.count == 0 {
			q.head.first = 0
			q.head.last = 0
			q.head.next = nil
		} else {
			// set queue's head chunk to the next chunk;
			// the old head will fall out of scope and be GC-ed
			q.head = q.head.next
		}
	}

	// return the retrieved item
	return item
}
107
vendor/github.com/foize/go.fifo/readme.md
generated
vendored
Normal file
@@ -0,0 +1,107 @@
## go.fifo

### Description
go.fifo provides a simple FIFO thread-safe queue.
*fifo.Queue supports pushing an item at the end with Add(), and popping an item from the front with Next().
There is no intermediate type for the stored data. Data is directly added and retrieved as type interface{}.
The queue itself is implemented as a singly-linked list of chunks containing at most 64 items each.

### Installation
`go get github.com/foize/go.fifo`

### Usage
```go
package main

import (
	"fmt"

	"github.com/foize/go.fifo"
)

func main() {
	// create a new queue
	numbers := fifo.NewQueue()

	// add items to the queue
	numbers.Add(42)
	numbers.Add(123)
	numbers.Add(456)

	// retrieve items from the queue
	fmt.Println(numbers.Next()) // 42
	fmt.Println(numbers.Next()) // 123
	fmt.Println(numbers.Next()) // 456
}
```

```go
package main

import (
	"fmt"

	"github.com/foize/go.fifo"
)

type thing struct {
	Text   string
	Number int
}

func main() {
	// create a new queue
	things := fifo.NewQueue()

	// add items to the queue
	things.Add(&thing{
		Text:   "one thing",
		Number: 1,
	})
	things.Add(&thing{
		Text:   "another thing",
		Number: 2,
	})

	// retrieve items from the queue
	for {
		// get a new item from the things queue
		item := things.Next()

		// check if there was an item
		if item == nil {
			fmt.Println("queue is empty")
			return
		}

		// assert the type for the item
		someThing := item.(*thing)

		// print the fields
		fmt.Println(someThing.Text)
		fmt.Printf("with number: %d\n", someThing.Number)
	}
}

/* output: */
// one thing
// with number: 1
// another thing
// with number: 2
// queue is empty
```

### Documentation
Documentation can be found at [godoc.org/github.com/foize/go.fifo](http://godoc.org/github.com/foize/go.fifo).
For more detailed documentation, read the source.

### History
This package is based on github.com/yasushi-saito/fifo_queue
There are several differences:
- renamed package to `fifo` to make usage simpler
- removed intermediate type `Item`; interface{} is now used directly instead
- renamed (*Queue).PushBack() to (*Queue).Add()
- renamed (*Queue).PopFront() to (*Queue).Next()
- Next() will not panic on an empty queue, it will just return a nil interface{}
- Add() does not accept a nil interface{} and will panic when one is added
- made fifo.Queue thread/goroutine-safe (sync.Mutex)
- added a lot of comments
- renamed internal variable/field names
13
vendor/github.com/getsentry/sentry-go/.codecov.yml
generated
vendored
Normal file
@@ -0,0 +1,13 @@
codecov:
  # across
  notify:
    # Do not notify until at least this number of reports have been uploaded
    # from the CI pipeline. We normally have more than that number, but 6
    # should be enough to get a first notification.
    after_n_builds: 6
coverage:
  status:
    project:
      default:
        # Do not fail the commit status if the coverage was reduced up to this value
        threshold: 0.5%
40
vendor/github.com/getsentry/sentry-go/.craft.yml
generated
vendored
Normal file
@@ -0,0 +1,40 @@
minVersion: 0.35.0
changelogPolicy: simple
artifactProvider:
  name: none
targets:
  - name: github
    tagPrefix: v
  - name: github
    tagPrefix: otel/v
    tagOnly: true
  - name: github
    tagPrefix: echo/v
    tagOnly: true
  - name: github
    tagPrefix: fasthttp/v
    tagOnly: true
  - name: github
    tagPrefix: fiber/v
    tagOnly: true
  - name: github
    tagPrefix: gin/v
    tagOnly: true
  - name: github
    tagPrefix: iris/v
    tagOnly: true
  - name: github
    tagPrefix: negroni/v
    tagOnly: true
  - name: github
    tagPrefix: logrus/v
    tagOnly: true
  - name: github
    tagPrefix: slog/v
    tagOnly: true
  - name: github
    tagPrefix: zerolog/v
    tagOnly: true
  - name: registry
    sdks:
      github:getsentry/sentry-go:
5
vendor/github.com/getsentry/sentry-go/.gitattributes
generated
vendored
Normal file
@@ -0,0 +1,5 @@
# Tell Git to use LF for line endings on all platforms.
# Required to have correct test data on Windows.
# https://github.com/mvdan/github-actions-golang#caveats
# https://github.com/actions/checkout/issues/135#issuecomment-613361104
* text eol=lf
14
vendor/github.com/getsentry/sentry-go/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,14 @@
# Code coverage artifacts
coverage.txt
coverage.out
coverage.html
.coverage/

# Just my personal way of tracking stuff — Kamil
FIXME.md
TODO.md
!NOTES.md

# IDE system files
.idea
.vscode
46
vendor/github.com/getsentry/sentry-go/.golangci.yml
generated
vendored
Normal file
@@ -0,0 +1,46 @@
linters:
  disable-all: true
  enable:
    - bodyclose
    - dogsled
    - dupl
    - errcheck
    - exportloopref
    - gochecknoinits
    - goconst
    - gocritic
    - gocyclo
    - godot
    - gofmt
    - goimports
    - gosec
    - gosimple
    - govet
    - ineffassign
    - misspell
    - nakedret
    - prealloc
    - revive
    - staticcheck
    - typecheck
    - unconvert
    - unparam
    - unused
    - whitespace
issues:
  exclude-rules:
    - path: _test\.go
      linters:
        - goconst
        - prealloc
    - path: _test\.go
      text: "G306:"
      linters:
        - gosec
    - path: errors_test\.go
      linters:
        - unused
    - path: http/example_test\.go
      linters:
        - errcheck
        - bodyclose
960
vendor/github.com/getsentry/sentry-go/CHANGELOG.md
generated
vendored
Normal file
@@ -0,0 +1,960 @@
# Changelog
|
||||
|
||||
## 0.31.1
|
||||
|
||||
The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.31.1.
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- Correct wrong module name for `sentry-go/logrus` ([#950](https://github.com/getsentry/sentry-go/pull/950))
|
||||
|
||||
## 0.31.0
|
||||
|
||||
The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.31.0.
|
||||
|
||||
### Breaking Changes
|
||||
|
||||
- Remove support for metrics. Read more about the end of the Metrics beta [here](https://sentry.zendesk.com/hc/en-us/articles/26369339769883-Metrics-Beta-Ended-on-October-7th). ([#914](https://github.com/getsentry/sentry-go/pull/914))
|
||||
|
||||
- Remove support for profiling. ([#915](https://github.com/getsentry/sentry-go/pull/915))
|
||||
|
||||
- Remove `Segment` field from the `User` struct. This field is no longer used in the Sentry product. ([#928](https://github.com/getsentry/sentry-go/pull/928))
|
||||
|
||||
- Every integration is now a separate module, reducing the binary size and number of dependencies. Once you update `sentry-go` to latest version, you'll need to `go get` the integration you want to use. For example, if you want to use the `echo` integration, you'll need to run `go get github.com/getsentry/sentry-go/echo` ([#919](github.com/getsentry/sentry-go/pull/919)).
|
||||
|
||||
### Features
|
||||
|
||||
Add the ability to override `hub` in `context` for integrations that use custom context. ([#931](https://github.com/getsentry/sentry-go/pull/931))
|
||||
|
||||
- Add `HubProvider` Hook for `sentrylogrus`, enabling dynamic Sentry hub allocation for each log entry or goroutine. ([#936](https://github.com/getsentry/sentry-go/pull/936))
|
||||
|
||||
This change enhances compatibility with Sentry's recommendation of using separate hubs per goroutine. To ensure a separate Sentry hub for each goroutine, configure the `HubProvider` like this:
|
||||
|
||||
```go
|
||||
hook, err := sentrylogrus.New(nil, sentry.ClientOptions{})
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to initialize Sentry hook: %v", err)
|
||||
}
|
||||
|
||||
// Set a custom HubProvider to generate a new hub for each goroutine or log entry
|
||||
hook.SetHubProvider(func() *sentry.Hub {
|
||||
client, _ := sentry.NewClient(sentry.ClientOptions{})
|
||||
return sentry.NewHub(client, sentry.NewScope())
|
||||
})
|
||||
|
||||
logrus.AddHook(hook)
|
||||
```
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- Add support for closing worker goroutines started by the `HTTPTranport` to prevent goroutine leaks. ([#894](https://github.com/getsentry/sentry-go/pull/894))
|
||||
|
||||
```go
|
||||
client, _ := sentry.NewClient()
|
||||
defer client.Close()
|
||||
```
|
||||
|
||||
Worker can be also closed by calling `Close()` method on the `HTTPTransport` instance. `Close` should be called after `Flush` and before terminating the program otherwise some events may be lost.
|
||||
|
||||
```go
|
||||
transport := sentry.NewHTTPTransport()
|
||||
defer transport.Close()
|
||||
```
|
||||
|
||||
### Misc
|
||||
|
||||
- Bump [gin-gonic/gin](https://github.com/gin-gonic/gin) to v1.9.1. ([#946](https://github.com/getsentry/sentry-go/pull/946))
|
||||
|
||||
## 0.30.0
|
||||
|
||||
The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.30.0.
|
||||
|
||||
### Features
|
||||
|
||||
- Add `sentryzerolog` integration ([#857](https://github.com/getsentry/sentry-go/pull/857))
|
||||
- Add `sentryslog` integration ([#865](https://github.com/getsentry/sentry-go/pull/865))
|
||||
- Always set Mechanism Type to generic ([#896](https://github.com/getsentry/sentry-go/pull/897))
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- Prevent panic in `fasthttp` and `fiber` integration in case a malformed URL has to be parsed ([#912](https://github.com/getsentry/sentry-go/pull/912))
|
||||
|
||||
### Misc
|
||||
|
||||
Drop support for Go 1.18, 1.19 and 1.20. The currently supported Go versions are the last 3 stable releases: 1.23, 1.22 and 1.21.
|
||||
|
||||
## 0.29.1
|
||||
|
||||
The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.29.1.
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- Correlate errors to the current trace ([#886](https://github.com/getsentry/sentry-go/pull/886))
|
||||
- Set the trace context when the transaction finishes ([#888](https://github.com/getsentry/sentry-go/pull/888))
|
||||
|
||||
### Misc
|
||||
|
||||
- Update the `sentrynegroni` integration to use the latest (v3.1.1) version of Negroni ([#885](https://github.com/getsentry/sentry-go/pull/885))
|
||||
|
||||
## 0.29.0
|
||||
|
||||
The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.29.0.
|
||||
|
||||
### Breaking Changes
|
||||
|
||||
- Remove the `sentrymartini` integration ([#861](https://github.com/getsentry/sentry-go/pull/861))
|
||||
- The `WrapResponseWriter` has been moved from the `sentryhttp` package to the `internal/httputils` package. If you've imported it previosuly, you'll need to copy the implementation in your project. ([#871](https://github.com/getsentry/sentry-go/pull/871))
|
||||
|
||||
### Features
|
||||
|
||||
- Add new convenience methods to continue a trace and propagate tracing headers for error-only use cases. ([#862](https://github.com/getsentry/sentry-go/pull/862))
|
||||
|
||||
If you are not using one of our integrations, you can manually continue an incoming trace by using `sentry.ContinueTrace()` by providing the `sentry-trace` and `baggage` header received from a downstream SDK.
|
||||
|
||||
```go
|
||||
hub := sentry.CurrentHub()
|
||||
sentry.ContinueTrace(hub, r.Header.Get(sentry.SentryTraceHeader), r.Header.Get(sentry.SentryBaggageHeader)),
|
||||
```
|
||||
|
||||
You can use `hub.GetTraceparent()` and `hub.GetBaggage()` to fetch the necessary header values for outgoing HTTP requests.
|
||||
|
||||
```go
|
||||
hub := sentry.GetHubFromContext(ctx)
|
||||
req, _ := http.NewRequest("GET", "http://localhost:3000", nil)
|
||||
req.Header.Add(sentry.SentryTraceHeader, hub.GetTraceparent())
|
||||
req.Header.Add(sentry.SentryBaggageHeader, hub.GetBaggage())
|
||||
```
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- Initialize `HTTPTransport.limit` if `nil` ([#844](https://github.com/getsentry/sentry-go/pull/844))
|
||||
- Fix `sentry.StartTransaction()` returning a transaction with an outdated context on existing transactions ([#854](https://github.com/getsentry/sentry-go/pull/854))
|
||||
- Treat `Proxy-Authorization` as a sensitive header ([#859](https://github.com/getsentry/sentry-go/pull/859))
|
||||
- Add support for the `http.Hijacker` interface to the `sentrynegroni` package ([#871](https://github.com/getsentry/sentry-go/pull/871))
|
||||
- Go version >= 1.23: Use value from `http.Request.Pattern` for HTTP transaction names when using `sentryhttp` & `sentrynegroni` ([#875](https://github.com/getsentry/sentry-go/pull/875))
|
||||
- Go version >= 1.21: Fix closure functions name grouping ([#877](https://github.com/getsentry/sentry-go/pull/877))
|
||||
|
||||
### Misc
|
||||
|
||||
- Collect `span` origins ([#849](https://github.com/getsentry/sentry-go/pull/849))
|
||||
|
||||
## 0.28.1
|
||||
|
||||
The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.28.1.
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
- Implement `http.ResponseWriter` to hook into various parts of the response process ([#837](https://github.com/getsentry/sentry-go/pull/837))
|
||||
## 0.28.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.28.0.

### Features

- Add a `Fiber` performance tracing & error reporting integration ([#795](https://github.com/getsentry/sentry-go/pull/795))
- Add performance tracing to the `Echo` integration ([#722](https://github.com/getsentry/sentry-go/pull/722))
- Add performance tracing to the `FastHTTP` integration ([#723](https://github.com/getsentry/sentry-go/pull/723))
- Add performance tracing to the `Iris` integration ([#809](https://github.com/getsentry/sentry-go/pull/809))
- Add performance tracing to the `Negroni` integration ([#808](https://github.com/getsentry/sentry-go/pull/808))
- Add `FailureIssueThreshold` & `RecoveryThreshold` to `MonitorConfig` ([#775](https://github.com/getsentry/sentry-go/pull/775))
- Use `errors.Unwrap()` to create exception groups ([#792](https://github.com/getsentry/sentry-go/pull/792))
- Add support for matching on strings for `ClientOptions.IgnoreErrors` & `ClientOptions.IgnoreTransactions` ([#819](https://github.com/getsentry/sentry-go/pull/819))
- Add `http.request.method` attribute for performance span data ([#786](https://github.com/getsentry/sentry-go/pull/786))
- Accept `interface{}` for span data values ([#784](https://github.com/getsentry/sentry-go/pull/784))

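The string matching added for `IgnoreErrors` and `IgnoreTransactions` can be illustrated with a stdlib-only sketch; `matches` is a hypothetical helper mirroring the documented substring-or-regex behavior, not the SDK's implementation:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matches reports whether msg matches any pattern, first as a plain
// substring and then, as a fallback, as a regular expression.
func matches(msg string, patterns []string) bool {
	for _, p := range patterns {
		if strings.Contains(msg, p) {
			return true
		}
		if re, err := regexp.Compile(p); err == nil && re.MatchString(msg) {
			return true
		}
	}
	return false
}

func main() {
	ignore := []string{"context canceled", "(?i)timeout"}
	fmt.Println(matches("rpc error: context canceled", ignore)) // true
	fmt.Println(matches("request Timeout after 30s", ignore))   // true
	fmt.Println(matches("connection refused", ignore))          // false
}
```
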
### Bug Fixes

- Fix missing stack trace for parsing error in `logrusentry` ([#689](https://github.com/getsentry/sentry-go/pull/689))

## 0.27.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.27.0.

### Breaking Changes

- `Exception.ThreadId` is now typed as `uint64`. It was wrongly typed as `string` before. ([#770](https://github.com/getsentry/sentry-go/pull/770))

### Misc

- Export `Event.Attachments` ([#771](https://github.com/getsentry/sentry-go/pull/771))

## 0.26.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.26.0.

### Breaking Changes

As previously announced, this release removes some methods from the SDK.

- `sentry.TransactionName()` was removed; use `sentry.WithTransactionName()` instead.
- `sentry.OpName()` was removed; use `sentry.WithOpName()` instead.
- `sentry.TransctionSource()` was removed; use `sentry.WithTransactionSource()` instead.
- `sentry.SpanSampled()` was removed; use `sentry.WithSpanSampled()` instead.

### Features

- Add `WithDescription` span option ([#751](https://github.com/getsentry/sentry-go/pull/751))

  ```go
  span := sentry.StartSpan(ctx, "http.client", sentry.WithDescription("GET /api/users"))
  ```

- Add support for package name parsing in Go 1.20 and higher ([#730](https://github.com/getsentry/sentry-go/pull/730))

### Bug Fixes

- Apply `ClientOptions.SampleRate` only to errors & messages ([#754](https://github.com/getsentry/sentry-go/pull/754))
- Check if git is available before executing any git commands ([#737](https://github.com/getsentry/sentry-go/pull/737))

## 0.25.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.25.0.

### Breaking Changes

As previously announced, this release removes two global constants from the SDK.

- `sentry.Version` was removed. Use `sentry.SDKVersion` instead ([#727](https://github.com/getsentry/sentry-go/pull/727))
- `sentry.SDKIdentifier` was removed. Use `Client.GetSDKIdentifier()` instead ([#727](https://github.com/getsentry/sentry-go/pull/727))

### Features

- Add `ClientOptions.IgnoreTransactions`, which allows you to ignore specific transactions based on their name ([#717](https://github.com/getsentry/sentry-go/pull/717))
- Add `ClientOptions.Tags`, which allows you to set global tags that are applied to all events. You can also define tags by setting `SENTRY_TAGS_` environment variables ([#718](https://github.com/getsentry/sentry-go/pull/718))

### Bug fixes

- Fix an issue in the profiler that would cause an infinite loop if the duration of a transaction is longer than 30 seconds ([#724](https://github.com/getsentry/sentry-go/issues/724))

### Misc

- `dsn.RequestHeaders()` is not to be removed, though it is still considered deprecated and should only be used when using a custom transport that sends events to the `/store` endpoint ([#720](https://github.com/getsentry/sentry-go/pull/720))

## 0.24.1

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.24.1.

### Bug fixes

- Prevent a panic in `sentryotel.flushSpanProcessor()` ([#711](https://github.com/getsentry/sentry-go/pull/711))
- Prevent a panic when setting the SDK identifier ([#715](https://github.com/getsentry/sentry-go/pull/715))

## 0.24.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.24.0.

### Deprecations

- `sentry.Version` to be removed in 0.25.0. Use `sentry.SDKVersion` instead.
- `sentry.SDKIdentifier` to be removed in 0.25.0. Use `Client.GetSDKIdentifier()` instead.
- `dsn.RequestHeaders()` to be removed after 0.25.0, but no earlier than December 1, 2023. Requests to the `/envelope` endpoint are authenticated using the DSN in the envelope header.

### Features

- Run a single instance of the profiler instead of one per goroutine ([#655](https://github.com/getsentry/sentry-go/pull/655))
- Use the route path as the transaction name when using the Gin integration ([#675](https://github.com/getsentry/sentry-go/pull/675))
- Set the SDK name accordingly when a framework integration is used ([#694](https://github.com/getsentry/sentry-go/pull/694))
- Read release information (VCS revision) from `debug.ReadBuildInfo` ([#704](https://github.com/getsentry/sentry-go/pull/704))

### Bug fixes

- [otel] Fix incorrect usage of `attributes.Value.AsString` ([#684](https://github.com/getsentry/sentry-go/pull/684))
- Fix trace function name parsing in profiler on go1.21+ ([#695](https://github.com/getsentry/sentry-go/pull/695))

### Misc

- Test against Go 1.21 ([#695](https://github.com/getsentry/sentry-go/pull/695))
- Make tests more robust ([#698](https://github.com/getsentry/sentry-go/pull/698), [#699](https://github.com/getsentry/sentry-go/pull/699), [#700](https://github.com/getsentry/sentry-go/pull/700), [#702](https://github.com/getsentry/sentry-go/pull/702))

## 0.23.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.23.0.

### Features

- Initial support for [Cron Monitoring](https://docs.sentry.io/product/crons/) ([#661](https://github.com/getsentry/sentry-go/pull/661))

  Here is what basic usage of the feature looks like:

  ```go
  // 🟡 Notify Sentry your job is running:
  checkinId := sentry.CaptureCheckIn(
  	&sentry.CheckIn{
  		MonitorSlug: "<monitor-slug>",
  		Status:      sentry.CheckInStatusInProgress,
  	},
  	nil,
  )

  // Execute your scheduled task here...

  // 🟢 Notify Sentry your job has completed successfully:
  sentry.CaptureCheckIn(
  	&sentry.CheckIn{
  		ID:          *checkinId,
  		MonitorSlug: "<monitor-slug>",
  		Status:      sentry.CheckInStatusOK,
  	},
  	nil,
  )
  ```

  A full example of using Crons Monitoring is available [here](https://github.com/getsentry/sentry-go/blob/dde4d360660838f3c2e0ced8205bc8f7a8d312d9/_examples/crons/main.go).

  More documentation on configuring and using Crons [can be found here](https://docs.sentry.io/platforms/go/crons/).

- Add support for [Event Attachments](https://docs.sentry.io/platforms/go/enriching-events/attachments/) ([#670](https://github.com/getsentry/sentry-go/pull/670))

  It's now possible to add file/binary payloads to Sentry events:

  ```go
  sentry.ConfigureScope(func(scope *sentry.Scope) {
  	scope.AddAttachment(&sentry.Attachment{
  		Filename:    "report.html",
  		ContentType: "text/html",
  		Payload:     []byte("<h1>Look, HTML</h1>"),
  	})
  })
  ```

  The attachment will then be accessible on the Issue Details page.

- Add sampling decision to trace envelope header ([#666](https://github.com/getsentry/sentry-go/pull/666))
- Expose the `SpanFromContext` function ([#672](https://github.com/getsentry/sentry-go/pull/672))

### Bug fixes

- Make `Span.Finish` a no-op when the span is already finished ([#660](https://github.com/getsentry/sentry-go/pull/660))

## 0.22.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.22.0.

This release contains initial [profiling](https://docs.sentry.io/product/profiling/) support, as well as a few bug fixes and improvements.

### Features

- Initial (alpha) support for [profiling](https://docs.sentry.io/product/profiling/) ([#626](https://github.com/getsentry/sentry-go/pull/626))

  Profiling is disabled by default. To enable it, configure both `TracesSampleRate` and `ProfilesSampleRate` when initializing the SDK:

  ```go
  err := sentry.Init(sentry.ClientOptions{
  	Dsn:              "__DSN__",
  	EnableTracing:    true,
  	TracesSampleRate: 1.0,
  	// The sampling rate for profiling is relative to TracesSampleRate.
  	// In this case, we'll capture profiles for 100% of transactions.
  	ProfilesSampleRate: 1.0,
  })
  ```

  More documentation on profiling and current limitations [can be found here](https://docs.sentry.io/platforms/go/profiling/).

- Add transactions/tracing support to the Gin integration ([#644](https://github.com/getsentry/sentry-go/pull/644))

### Bug fixes

- Always set a valid source on transactions ([#637](https://github.com/getsentry/sentry-go/pull/637))
- Clone `scope.Context` in more places to avoid panics on concurrent reads and writes ([#638](https://github.com/getsentry/sentry-go/pull/638))
  - Fixes [#570](https://github.com/getsentry/sentry-go/issues/570)
- Fix frames recognized as not being in-app still showing as in-app ([#647](https://github.com/getsentry/sentry-go/pull/647))

## 0.21.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.21.0.

Note: this release includes one **breaking change** and some **deprecations**, which are listed below.

### Breaking Changes

**This change does not apply if you use [https://sentry.io](https://sentry.io)**

- Remove support for the `/store` endpoint ([#631](https://github.com/getsentry/sentry-go/pull/631))
  - This change requires a self-hosted version of Sentry 20.6.0 or higher. If you are using a version of [self-hosted Sentry](https://develop.sentry.dev/self-hosted/) (aka *on-premise*) older than 20.6.0, then you will need to [upgrade](https://develop.sentry.dev/self-hosted/releases/) your instance.

### Features

- Rename four span option functions ([#611](https://github.com/getsentry/sentry-go/pull/611), [#624](https://github.com/getsentry/sentry-go/pull/624))
  - `TransctionSource` -> `WithTransactionSource`
  - `SpanSampled` -> `WithSpanSampled`
  - `OpName` -> `WithOpName`
  - `TransactionName` -> `WithTransactionName`
  - Old functions `TransctionSource`, `SpanSampled`, `OpName`, and `TransactionName` are still available but are now **deprecated** and will be removed in a future release.
- Make `client.EventFromMessage` and `client.EventFromException` methods public ([#607](https://github.com/getsentry/sentry-go/pull/607))
- Add `client.SetException` method ([#607](https://github.com/getsentry/sentry-go/pull/607))
  - This allows setting or adding errors to an existing `Event`.

### Bug Fixes

- Protect from panics while doing concurrent reads/writes to Span data fields ([#609](https://github.com/getsentry/sentry-go/pull/609))
- [otel] Improve detection of Sentry-related spans ([#632](https://github.com/getsentry/sentry-go/pull/632), [#636](https://github.com/getsentry/sentry-go/pull/636))
  - Fixes cases when HTTP spans containing requests to Sentry were captured by Sentry ([#627](https://github.com/getsentry/sentry-go/issues/627))

### Misc

- Drop testing in (legacy) GOPATH mode ([#618](https://github.com/getsentry/sentry-go/pull/618))
- Remove outdated documentation from https://pkg.go.dev/github.com/getsentry/sentry-go ([#623](https://github.com/getsentry/sentry-go/pull/623))

## 0.20.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.20.0.

Note: this release has some **breaking changes**, which are listed below.

### Breaking Changes

- Remove the following methods: `Scope.SetTransaction()`, `Scope.Transaction()` ([#605](https://github.com/getsentry/sentry-go/pull/605))

  `Span.Name` should be used instead to access the transaction's name.

  For example, the following [`TracesSampler`](https://docs.sentry.io/platforms/go/configuration/sampling/#setting-a-sampling-function) function should now be written as follows:

  **Before:**

  ```go
  TracesSampler: func(ctx sentry.SamplingContext) float64 {
  	hub := sentry.GetHubFromContext(ctx.Span.Context())
  	if hub.Scope().Transaction() == "GET /health" {
  		return 0
  	}
  	return 1
  },
  ```

  **After:**

  ```go
  TracesSampler: func(ctx sentry.SamplingContext) float64 {
  	if ctx.Span.Name == "GET /health" {
  		return 0
  	}
  	return 1
  },
  ```

### Features

- Add `Span.SetContext()` method ([#599](https://github.com/getsentry/sentry-go/pull/599/))
  - It is recommended to use it instead of `hub.Scope().SetContext` when setting or updating context on transactions.
- Add `DebugMeta` interface to `Event` and extend `Frame` structure with more fields ([#606](https://github.com/getsentry/sentry-go/pull/606))
  - More about the DebugMeta interface [here](https://develop.sentry.dev/sdk/event-payloads/debugmeta/).

### Bug Fixes

- [otel] Fix missing OpenTelemetry context on some events ([#599](https://github.com/getsentry/sentry-go/pull/599), [#605](https://github.com/getsentry/sentry-go/pull/605))
  - Fixes [#596](https://github.com/getsentry/sentry-go/issues/596).
- [otel] Better handling for HTTP span attributes ([#610](https://github.com/getsentry/sentry-go/pull/610))

### Misc

- Bump minimum versions: `github.com/kataras/iris/v12` to 12.2.0, `github.com/labstack/echo/v4` to v4.10.0 ([#595](https://github.com/getsentry/sentry-go/pull/595))
  - Resolves [GO-2022-1144 / CVE-2022-41717](https://deps.dev/advisory/osv/GO-2022-1144), [GO-2023-1495 / CVE-2022-41721](https://deps.dev/advisory/osv/GO-2023-1495), [GO-2022-1059 / CVE-2022-32149](https://deps.dev/advisory/osv/GO-2022-1059).
- Bump `google.golang.org/protobuf` minimum required version to 1.29.1 ([#604](https://github.com/getsentry/sentry-go/pull/604))
  - This fixes a potential denial of service issue ([CVE-2023-24535](https://github.com/advisories/GHSA-hw7c-3rfg-p46j)).
- Exclude the `otel` module when building in GOPATH mode ([#615](https://github.com/getsentry/sentry-go/pull/615))

## 0.19.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.19.0.

### Features

- Add support for exception mechanism metadata ([#564](https://github.com/getsentry/sentry-go/pull/564/))
  - More about exception mechanisms [here](https://develop.sentry.dev/sdk/event-payloads/exception/#exception-mechanism).

### Bug Fixes

- [otel] Use the correct "trace" context when sending a Sentry error ([#580](https://github.com/getsentry/sentry-go/pull/580/))

### Misc

- Drop support for Go 1.17, add support for Go 1.20 ([#563](https://github.com/getsentry/sentry-go/pull/563/))
  - According to our policy, we're officially supporting the last three minor releases of Go.
- Switch repository license to MIT ([#583](https://github.com/getsentry/sentry-go/pull/583/))
  - More about Sentry licensing [here](https://open.sentry.io/licensing/).
- Bump `golang.org/x/text` minimum required version to 0.3.8 ([#586](https://github.com/getsentry/sentry-go/pull/586))
  - This fixes the [CVE-2022-32149](https://github.com/advisories/GHSA-69ch-w2m2-3vjp) vulnerability.

## 0.18.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.18.0.
This release contains initial support for [OpenTelemetry](https://opentelemetry.io/) and various other bug fixes and improvements.

**Note**: This is the last release supporting Go 1.17.

### Features

- Initial support for [OpenTelemetry](https://opentelemetry.io/).
  You can now send all your OpenTelemetry spans to Sentry.

  Install the `otel` module

  ```bash
  go get github.com/getsentry/sentry-go \
         github.com/getsentry/sentry-go/otel
  ```

  Configure the Sentry and OpenTelemetry SDKs

  ```go
  import (
  	"go.opentelemetry.io/otel"
  	sdktrace "go.opentelemetry.io/otel/sdk/trace"
  	"github.com/getsentry/sentry-go"
  	"github.com/getsentry/sentry-go/otel"
  	// ...
  )

  // Initialize the Sentry SDK
  sentry.Init(sentry.ClientOptions{
  	Dsn:              "__DSN__",
  	EnableTracing:    true,
  	TracesSampleRate: 1.0,
  })

  // Set up the Sentry span processor
  tp := sdktrace.NewTracerProvider(
  	sdktrace.WithSpanProcessor(sentryotel.NewSentrySpanProcessor()),
  	// ...
  )
  otel.SetTracerProvider(tp)

  // Set up the Sentry propagator
  otel.SetTextMapPropagator(sentryotel.NewSentryPropagator())
  ```

  You can read more about using OpenTelemetry with Sentry in our [docs](https://docs.sentry.io/platforms/go/performance/instrumentation/opentelemetry/).

### Bug Fixes

- Do not freeze the Dynamic Sampling Context when no Sentry values are present in the baggage header ([#532](https://github.com/getsentry/sentry-go/pull/532))
- Create a frozen Dynamic Sampling Context when calling `span.ToBaggage()` ([#566](https://github.com/getsentry/sentry-go/pull/566))
- Fix baggage parsing and encoding in vendored otel package ([#568](https://github.com/getsentry/sentry-go/pull/568))

### Misc

- Add `Span.SetDynamicSamplingContext()` ([#539](https://github.com/getsentry/sentry-go/pull/539/))
- Add various getters for `Dsn` ([#540](https://github.com/getsentry/sentry-go/pull/540))
- Add `SpanOption::SpanSampled` ([#546](https://github.com/getsentry/sentry-go/pull/546))
- Add `Span.SetData()` ([#542](https://github.com/getsentry/sentry-go/pull/542))
- Add `Span.IsTransaction()` ([#543](https://github.com/getsentry/sentry-go/pull/543))
- Add `Span.GetTransaction()` method ([#558](https://github.com/getsentry/sentry-go/pull/558))

## 0.17.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.17.0.
This release contains a new `BeforeSendTransaction` hook option and corrects two regressions introduced in `0.16.0`.

### Features

- Add `BeforeSendTransaction` hook to `ClientOptions` ([#517](https://github.com/getsentry/sentry-go/pull/517))
  - Here's [an example](https://github.com/getsentry/sentry-go/blob/master/_examples/http/main.go#L56-L66) of how `BeforeSendTransaction` can be used to modify or drop transaction events.

### Bug Fixes

- Do not crash in `Span.Finish()` when the `Client` is empty ([#520](https://github.com/getsentry/sentry-go/pull/520))
  - Fixes [#518](https://github.com/getsentry/sentry-go/issues/518)
- Attach non-PII/non-sensitive request headers to events when `ClientOptions.SendDefaultPII` is set to `false` ([#524](https://github.com/getsentry/sentry-go/pull/524))
  - Fixes [#523](https://github.com/getsentry/sentry-go/issues/523)

### Misc

- Clarify how to handle `logrus.Fatalf` events ([#501](https://github.com/getsentry/sentry-go/pull/501/))
- Rename the `examples` directory to `_examples` ([#521](https://github.com/getsentry/sentry-go/pull/521))
  - This removes an indirect dependency on `github.com/golang-jwt/jwt`

## 0.16.0

The Sentry SDK team is happy to announce the immediate availability of Sentry Go SDK v0.16.0.
Due to ongoing work towards a stable API for `v1.0.0`, we sadly had to include **two breaking changes** in this release.

### Breaking Changes

- Add `EnableTracing`, a boolean option flag to enable performance monitoring (`false` by default).
  - If you're using `TracesSampleRate` or `TracesSampler`, this option is **required** to enable performance monitoring.

  ```go
  sentry.Init(sentry.ClientOptions{
  	EnableTracing:    true,
  	TracesSampleRate: 1.0,
  })
  ```

- Unify TracesSampler ([#498](https://github.com/getsentry/sentry-go/pull/498))
  - `TracesSampler` was changed to a callback that must return a `float64` between `0.0` and `1.0`.

  For example, you can apply a sample rate of `1.0` (100%) to all `/api` transactions, and a sample rate of `0.5` (50%) to all other transactions.
  You can read more about this in our [SDK docs](https://docs.sentry.io/platforms/go/configuration/filtering/#using-sampling-to-filter-transaction-events).

  ```go
  sentry.Init(sentry.ClientOptions{
  	TracesSampler: sentry.TracesSampler(func(ctx sentry.SamplingContext) float64 {
  		hub := sentry.GetHubFromContext(ctx.Span.Context())
  		name := hub.Scope().Transaction()

  		if strings.HasPrefix(name, "GET /api") {
  			return 1.0
  		}

  		return 0.5
  	}),
  })
  ```

### Features

- Send errors logged with [Logrus](https://github.com/sirupsen/logrus) to Sentry.
  - Have a look at our [logrus examples](https://github.com/getsentry/sentry-go/blob/master/_examples/logrus/main.go) on how to use the integration.
- Add support for Dynamic Sampling ([#491](https://github.com/getsentry/sentry-go/pull/491))
  - You can read more about Dynamic Sampling in our [product docs](https://docs.sentry.io/product/data-management-settings/dynamic-sampling/).
- Add detailed logging about the reason transactions are being dropped.
  - You can enable SDK logging via `sentry.ClientOptions.Debug: true`.

### Bug Fixes

- Do not clone the hub when calling `StartTransaction` ([#505](https://github.com/getsentry/sentry-go/pull/505))
  - Fixes [#502](https://github.com/getsentry/sentry-go/issues/502)

## 0.15.0

- fix: Scope values should not override Event values (#446)
- feat: Make maximum amount of spans configurable (#460)
- feat: Add a method to start a transaction (#482)
- feat: Extend User interface by adding Data, Name and Segment (#483)
- feat: Add ClientOptions.SendDefaultPII (#485)

## 0.14.0

- feat: Add function to continue from trace string (#434)
- feat: Add `max-depth` options (#428)
- *[breaking]* ref: Use a `Context` type mapping to a `map[string]interface{}` for all event contexts (#444)
- *[breaking]* ref: Replace deprecated `ioutil` pkg with `os` & `io` (#454)
- ref: Optimize `stacktrace.go` from size and speed (#467)
- ci: Test against `go1.19` and `go1.18`, drop `go1.16` and `go1.15` support (#432, #477)
- deps: Dependency update to fix CVEs (#462, #464, #477)

_NOTE:_ This version drops support for Go 1.16 and Go 1.15. The currently supported Go versions are the last 3 stable releases: 1.19, 1.18 and 1.17.

## v0.13.0

- ref: Change DSN ProjectID to be a string (#420)
- fix: When extracting PCs from stack frames, try the `PC` field (#393)
- build: Bump gin-gonic/gin from v1.4.0 to v1.7.7 (#412)
- build: Bump Go version in go.mod (#410)
- ci: Bump golangci-lint version in GH workflow (#419)
- ci: Update GraphQL config with appropriate permissions (#417)
- ci: Add craft release automation (#422)

## v0.12.0

- feat: Automatic Release detection (#363, #369, #386, #400)
- fix: Do not change Hub.lastEventID for transactions (#379)
- fix: Do not clear LastEventID when events are dropped (#382)
- Updates to documentation (#366, #385)

_NOTE:_
This version drops support for Go 1.14, however no changes have been made that would make the SDK not work with Go 1.14. The currently supported Go versions are the last 3 stable releases: 1.15, 1.16 and 1.17.
There are two behavior changes related to `LastEventID`, both of which were intended to align the behavior of the Sentry Go SDK with other Sentry SDKs.
The new [automatic release detection feature](https://github.com/getsentry/sentry-go/issues/335) makes it easier to use Sentry and separate events per release without requiring extra work from users. We intend to improve this functionality in a future release by utilizing information that will be available in runtime starting with Go 1.18. The tracking issue is [#401](https://github.com/getsentry/sentry-go/issues/401).

## v0.11.0

- feat(transports): Category-based Rate Limiting ([#354](https://github.com/getsentry/sentry-go/pull/354))
- feat(transports): Report User-Agent identifying SDK ([#357](https://github.com/getsentry/sentry-go/pull/357))
- fix(scope): Include event processors in clone ([#349](https://github.com/getsentry/sentry-go/pull/349))
- Improvements to `go doc` documentation ([#344](https://github.com/getsentry/sentry-go/pull/344), [#350](https://github.com/getsentry/sentry-go/pull/350), [#351](https://github.com/getsentry/sentry-go/pull/351))
- Miscellaneous changes to our testing infrastructure with GitHub Actions
  ([57123a40](https://github.com/getsentry/sentry-go/commit/57123a409be55f61b1d5a6da93c176c55a399ad0), [#128](https://github.com/getsentry/sentry-go/pull/128), [#338](https://github.com/getsentry/sentry-go/pull/338), [#345](https://github.com/getsentry/sentry-go/pull/345), [#346](https://github.com/getsentry/sentry-go/pull/346), [#352](https://github.com/getsentry/sentry-go/pull/352), [#353](https://github.com/getsentry/sentry-go/pull/353), [#355](https://github.com/getsentry/sentry-go/pull/355))

_NOTE:_
This version drops support for Go 1.13. The currently supported Go versions are the last 3 stable releases: 1.14, 1.15 and 1.16.
Users of the tracing functionality (`StartSpan`, etc) should upgrade to this version to benefit from separate rate limits for errors and transactions.
There are no breaking changes and upgrading should be a smooth experience for all users.

## v0.10.0

- feat: Debug connection reuse (#323)
- fix: Send root span data as `Event.Extra` (#329)
- fix: Do not double sample transactions (#328)
- fix: Do not override trace context of transactions (#327)
- fix: Drain and close API response bodies (#322)
- ci: Run tests against Go tip (#319)
- ci: Move away from Travis in favor of GitHub Actions (#314) (#321)

## v0.9.0

- feat: Initial tracing and performance monitoring support (#285)
- doc: Revamp sentryhttp documentation (#304)
- fix: Hub.PopScope never empties the scope stack (#300)
- ref: Report Event.Timestamp in local time (#299)
- ref: Report Breadcrumb.Timestamp in local time (#299)

_NOTE:_
This version introduces support for [Sentry's Performance Monitoring](https://docs.sentry.io/platforms/go/performance/).
The new tracing capabilities are beta, and we plan to expand them in future versions. Feedback is welcome, please open new issues on GitHub.
The `sentryhttp` package got better API docs, an [updated usage example](https://github.com/getsentry/sentry-go/tree/master/_examples/http) and support for creating automatic transactions as part of Performance Monitoring.

## v0.8.0

- build: Bump required version of Iris (#296)
- fix: avoid unnecessary allocation in Client.processEvent (#293)
- doc: Remove deprecation of sentryhttp.HandleFunc (#284)
- ref: Update sentryhttp example (#283)
- doc: Improve documentation of sentryhttp package (#282)
- doc: Clarify SampleRate documentation (#279)
- fix: Remove RawStacktrace (#278)
- docs: Add example of custom HTTP transport
- ci: Test against go1.15, drop go1.12 support (#271)

_NOTE:_
This version comes with a few updates. Some examples and documentation have been
improved. We've bumped the supported version of the Iris framework to avoid
LGPL-licensed modules in the module dependency graph.
The `Exception.RawStacktrace` and `Thread.RawStacktrace` fields have been
removed to conform to Sentry's ingestion protocol; only `Exception.Stacktrace`
and `Thread.Stacktrace` should appear in user code.

## v0.7.0

- feat: Include original error when event cannot be encoded as JSON (#258)
- feat: Use Hub from request context when available (#217, #259)
- feat: Extract stack frames from golang.org/x/xerrors (#262)
- feat: Make Environment Integration preserve existing context data (#261)
- feat: Recover and RecoverWithContext with arbitrary types (#268)
- feat: Report bad usage of CaptureMessage and CaptureEvent (#269)
- feat: Send debug logging to stderr by default (#266)
- feat: Several improvements to documentation (#223, #245, #250, #265)
- feat: Example of Recover followed by panic (#241, #247)
- feat: Add Transactions and Spans (to support OpenTelemetry Sentry Exporter) (#235, #243, #254)
- fix: Set either Frame.Filename or Frame.AbsPath (#233)
- fix: Clone requestBody to new Scope (#244)
- fix: Synchronize access and mutation of Hub.lastEventID (#264)
- fix: Avoid repeated syscalls in prepareEvent (#256)
- fix: Do not allocate new RNG for every event (#256)
- fix: Remove stale replace directive in go.mod (#255)
- fix(http): Deprecate HandleFunc, remove duplication (#260)

_NOTE:_
This version comes packed with several fixes and improvements and no breaking
changes.
Notably, there is a change in how the SDK reports file names in stack traces
that should resolve any ambiguity when looking at stack traces and using the
Suspect Commits feature.
We recommend that all users upgrade.

## v0.6.1

- fix: Use NewEvent to init Event struct (#220)

_NOTE:_
A change introduced in v0.6.0 with the intent of avoiding allocations made a
pattern used in official examples break in certain circumstances (attempting
to write to a nil map).
This release reverts the change such that maps in the Event struct are always
allocated.

## v0.6.0
|
||||
|
||||
- feat: Read module dependencies from runtime/debug (#199)
|
||||
- feat: Support chained errors using Unwrap (#206)
|
||||
- feat: Report chain of errors when available (#185)
|
||||
- **[breaking]** fix: Accept http.RoundTripper to customize transport (#205)
|
||||
Before the SDK accepted a concrete value of type `*http.Transport` in
|
||||
`ClientOptions`, now it accepts any value implementing the `http.RoundTripper`
|
||||
interface. Note that `*http.Transport` implements `http.RoundTripper`, so most
|
||||
code bases will continue to work unchanged.
|
||||
Users of custom transport gain the ability to pass in other implementations of
|
||||
`http.RoundTripper` and may be able to simplify their code bases.
- fix: Do not panic when scope event processor drops event (#192)
- **[breaking]** fix: Use time.Time for timestamps (#191)

  Users of sentry-go typically do not need to manipulate timestamps manually.
  For those who do, the field type changed from `int64` to `time.Time`, which
  should be more convenient to use. The recommended way to get the current time
  is `time.Now().UTC()`.
- fix: Report usage error including stack trace (#189)
- feat: Add Exception.ThreadID field (#183)
- ci: Test against Go 1.14, drop 1.11 (#170)
- feat: Limit reading bytes from request bodies (#168)
- **[breaking]** fix: Rename fasthttp integration package sentryhttp => sentryfasthttp

  The current recommendation is to use a named import, in which case existing
  code should not require any change:

  ```go
  package main

  import (
  	"fmt"

  	"github.com/getsentry/sentry-go"
  	sentryfasthttp "github.com/getsentry/sentry-go/fasthttp"
  	"github.com/valyala/fasthttp"
  )
  ```

_NOTE:_
This version includes some new features and a few breaking changes, none of
which should pose troubles with upgrading. Most code bases should be able to
upgrade without any changes.

## v0.5.1

- fix: Ignore err.Cause() when it is nil (#160)

## v0.5.0

- fix: Synchronize access to HTTPTransport.disabledUntil (#158)
- docs: Update Flush documentation (#153)
- fix: HTTPTransport.Flush panic and data race (#140)

_NOTE:_
This version changes the implementation of the default transport, modifying the
behavior of `sentry.Flush`. The previous behavior was to wait until there were
no buffered events; new concurrent events kept `Flush` from returning. The new
behavior is to wait until the last event prior to the call to `Flush` has been
sent or the timeout expires; new concurrent events have no effect. The new
behavior is in line with the [Unified API
Guidelines](https://docs.sentry.io/development/sdk-dev/unified-api/).

We have updated the documentation and examples to clarify that `Flush` is meant
to be called typically only once before program termination, to wait for
in-flight events to be sent to Sentry. Calling `Flush` after every event is not
recommended, as it introduces unnecessary latency to the surrounding function.
Please verify the usage of `sentry.Flush` in your code base.
## v0.4.0

- fix(stacktrace): Correctly report package names (#127)
- fix(stacktrace): Do not rely on AbsPath of files (#123)
- build: Require github.com/ugorji/go@v1.1.7 (#110)
- fix: Correctly store last event id (#99)
- fix: Include request body in event payload (#94)
- build: Reset go.mod version to 1.11 (#109)
- fix: Eliminate data race in modules integration (#105)
- feat: Add support for path prefixes in the DSN (#102)
- feat: Add HTTPClient option (#86)
- feat: Extract correct type and value from top-most error (#85)
- feat: Check for broken pipe errors in Gin integration (#82)
- fix: Client.CaptureMessage accept nil EventModifier (#72)

## v0.3.1

- feat: Send extra information exposed by the Go runtime (#76)
- fix: Handle new lines in module integration (#65)
- fix: Make sure that the cache is locked when updating for contextifyFramesIntegration
- ref: Update Iris integration and example to version 12
- misc: Remove indirect dependencies in order to move them to separate go.mod files

## v0.3.0

- feat: Retry event marshaling without contextual data if the first pass fails
- fix: Include `url.Parse` error in `DsnParseError`
- fix: Make more `Scope` methods safe for concurrency
- fix: Synchronize concurrent access to `Hub.client`
- ref: Remove mutex from `Scope` exported API
- ref: Remove mutex from `Hub` exported API
- ref: Compile regexps for `filterFrames` only once
- ref: Change `SampleRate` type to `float64`
- doc: `Scope.Clear` not safe for concurrent use
- ci: Test sentry-go with `go1.13`, drop `go1.10`

_NOTE:_
This version removes some internal APIs that had landed publicly (namely the `Hub`/`Scope` mutex structs) and may require (but shouldn't) some changes to your code.
This is not done through a major version update, as we are still in the `0.x` stage.

## v0.2.1

- fix: Run `Contextify` integration on `Threads` as well

## v0.2.0

- feat: Add `SetTransaction()` method on the `Scope`
- feat: `fasthttp` framework support with `sentryfasthttp` package
- fix: Add `RWMutex` locks to internal `Hub` and `Scope` changes

## v0.1.3

- feat: Move frames context reading into `contextifyFramesIntegration` (#28)

_NOTE:_
In case of any performance issues due to source context IO, you can let us know and turn off the integration in the meantime with:

```go
sentry.Init(sentry.ClientOptions{
	Integrations: func(integrations []sentry.Integration) []sentry.Integration {
		var filteredIntegrations []sentry.Integration
		for _, integration := range integrations {
			if integration.Name() == "ContextifyFrames" {
				continue
			}
			filteredIntegrations = append(filteredIntegrations, integration)
		}
		return filteredIntegrations
	},
})
```

## v0.1.2

- feat: Better source code location resolution and more useful in-app frames (#26)
- feat: Use `noopTransport` when no `Dsn` provided (#27)
- ref: Allow empty `Dsn` instead of returning an error (#22)
- fix: Use `NewScope` instead of literal struct inside a `scope.Clear` call (#24)
- fix: Add to `WaitGroup` before the request is put inside a buffer (#25)

## v0.1.1

- fix: Check for initialized `Client` in `AddBreadcrumbs` (#20)
- build: Bump version when releasing with Craft (#19)

## v0.1.0

- First stable release! \o/
## v0.0.1-beta.5

- feat: **[breaking]** Add `NewHTTPTransport` and `NewHTTPSyncTransport`, which accept all transport options
- feat: New `HTTPSyncTransport` that blocks after each call
- feat: New `Echo` integration
- ref: **[breaking]** Remove `BufferSize` option from `ClientOptions` and move it to `HTTPTransport` instead
- ref: Export default `HTTPTransport`
- ref: Export `net/http` integration handler
- ref: Set `Request` instantly in the package handlers, not in `recoverWithSentry`, so it can be accessed later on
- ci: Add craft config

## v0.0.1-beta.4

- feat: `IgnoreErrors` client option and corresponding integration
- ref: Reworked `net/http` integration, wrote better example and complete readme
- ref: Reworked `Gin` integration, wrote better example and complete readme
- ref: Reworked `Iris` integration, wrote better example and complete readme
- ref: Reworked `Negroni` integration, wrote better example and complete readme
- ref: Reworked `Martini` integration, wrote better example and complete readme
- ref: Remove `Handle()` from framework handlers and return it directly from New

## v0.0.1-beta.3

- feat: `Iris` framework support with `sentryiris` package
- feat: `Gin` framework support with `sentrygin` package
- feat: `Martini` framework support with `sentrymartini` package
- feat: `Negroni` framework support with `sentrynegroni` package
- feat: Add `Hub.Clone()` for easier framework integration
- feat: Return `EventID` from `Recovery` methods
- feat: Add `NewScope` and `NewEvent` functions and use them in the whole codebase
- feat: Add `AddEventProcessor` to the `Client`
- fix: Operate on a copy of the request body instead of the original
- ref: Try to read source files from the root directory, based on the filename as well, to make it work on AWS Lambda
- ref: Remove `gocertifi` dependency and document how to provide your own certificates
- ref: **[breaking]** Remove `Decorate` and `DecorateFunc` methods in favor of the `sentryhttp` package
- ref: **[breaking]** Allow integrations to live on the client, by passing the client instance in the `SetupOnce` method
- ref: **[breaking]** Remove `GetIntegration` from the `Hub`
- ref: **[breaking]** Remove `GlobalEventProcessors` getter from the public API

## v0.0.1-beta.2

- feat: Add `AttachStacktrace` client option to include a stacktrace for messages
- feat: Add `BufferSize` client option to configure transport buffer size
- feat: Add `SetRequest` method on a `Scope` to control `Request` context data
- feat: Add `FromHTTPRequest` for the `Request` type for easier extraction
- ref: Extract `Request` information more accurately
- fix: Attach `ServerName`, `Release`, `Dist`, `Environment` options to the event
- fix: Don't log events dropped due to a full transport buffer as sent
- fix: Don't panic; create an appropriate event when `CaptureException` or `Recover` is called with a `nil` value

## v0.0.1-beta

- Initial release
98 vendor/github.com/getsentry/sentry-go/CONTRIBUTING.md generated vendored Normal file
@@ -0,0 +1,98 @@
# Contributing to sentry-go

Hey, thank you if you're reading this — we welcome your contribution!

## Sending a Pull Request

Please help us save time when reviewing your PR by following this simple
process:

1. Is your PR a simple typo fix? Read no further, **click that green "Create
   pull request" button**!

2. For more complex PRs that involve behavior changes or new APIs, please
   consider [opening an **issue**][new-issue] describing the problem you're
   trying to solve if there's not one already.

   A PR is often one specific solution to a problem, and sometimes talking about
   the problem unfolds new possible solutions. Remember we will be responsible
   for maintaining the changes later.

3. Fixing a bug and changing a behavior? Please add automated tests to prevent
   future regressions.

4. Practice writing good commit messages. We have [commit
   guidelines][commit-guide].

5. We have [guidelines for PR submitters][pr-guide]. A short summary:

   - Good PR descriptions are very helpful and most of the time they include
     **why** something is done and why it is done in this particular way. Also list
     other possible solutions that were considered and discarded.
   - Be your own first reviewer. Make sure your code compiles and passes the
     existing tests.

[new-issue]: https://github.com/getsentry/sentry-go/issues/new/choose
[commit-guide]: https://develop.sentry.dev/code-review/#commit-guidelines
[pr-guide]: https://develop.sentry.dev/code-review/#guidelines-for-submitters

Please also read through our [SDK Development docs](https://develop.sentry.dev/sdk/).
They contain information about SDK features, expected payloads and best practices for
contributing to Sentry SDKs.

## Community

The public-facing channels for support and development of Sentry SDKs can be found on [Discord](https://discord.gg/Ww9hbqr).

## Testing

```console
$ go test
```

### Watch mode

Use: https://github.com/cespare/reflex

```console
$ reflex -g '*.go' -d "none" -- sh -c 'printf "\n"; go test'
```

### With data race detection

```console
$ go test -race
```

### Coverage

```console
$ go test -race -coverprofile=coverage.txt -covermode=atomic && go tool cover -html coverage.txt
```

## Linting

Lint with [`golangci-lint`](https://github.com/golangci/golangci-lint):

```console
$ golangci-lint run
```

## Release

1. Update `CHANGELOG.md` with a new version title in `vX.X.X` format and the list of changes.

   The command below can be used to get a list of changes since the last tag, with the format used in `CHANGELOG.md`:

   ```console
   $ git log --no-merges --format=%s $(git describe --abbrev=0).. | sed 's/^/- /'
   ```

2. Commit with a `misc: vX.X.X changelog` commit message and push to `master`.

3. Let [`craft`](https://github.com/getsentry/craft) do the rest:

   ```console
   $ craft prepare X.X.X
   $ craft publish X.X.X
   ```
21 vendor/github.com/getsentry/sentry-go/LICENSE generated vendored Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2019 Functional Software, Inc. dba Sentry

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
3 vendor/github.com/getsentry/sentry-go/MIGRATION.md generated vendored Normal file
@@ -0,0 +1,3 @@
# `raven-go` to `sentry-go` Migration Guide

A [`raven-go` to `sentry-go` migration guide](https://docs.sentry.io/platforms/go/migration/) is available at the official Sentry documentation site.
82 vendor/github.com/getsentry/sentry-go/Makefile generated vendored Normal file
@@ -0,0 +1,82 @@
.DEFAULT_GOAL := help

MKFILE_PATH := $(abspath $(lastword $(MAKEFILE_LIST)))
MKFILE_DIR := $(dir $(MKFILE_PATH))
ALL_GO_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {} \; | sort)
GO = go
TIMEOUT = 300

# Parse Makefile and display the help
help: ## Show help
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
.PHONY: help

build: ## Build everything
	for dir in $(ALL_GO_MOD_DIRS); do \
		cd "$${dir}"; \
		echo ">>> Running 'go build' for module: $${dir}"; \
		go build ./...; \
	done;
.PHONY: build

### Tests (inspired by https://github.com/open-telemetry/opentelemetry-go/blob/main/Makefile)
TEST_TARGETS := test-short test-verbose test-race
test-race: ARGS=-race
test-short: ARGS=-short
test-verbose: ARGS=-v -race
$(TEST_TARGETS): test
test: $(ALL_GO_MOD_DIRS:%=test/%) ## Run tests
test/%: DIR=$*
test/%:
	@echo ">>> Running tests for module: $(DIR)"
	@# We use '-count=1' to disable test caching.
	(cd $(DIR) && $(GO) test -count=1 -timeout $(TIMEOUT)s $(ARGS) ./...)
.PHONY: $(TEST_TARGETS) test

# Coverage
COVERAGE_MODE = atomic
COVERAGE_PROFILE = coverage.out
COVERAGE_REPORT_DIR = .coverage
COVERAGE_REPORT_DIR_ABS = "$(MKFILE_DIR)/$(COVERAGE_REPORT_DIR)"
$(COVERAGE_REPORT_DIR):
	mkdir -p $(COVERAGE_REPORT_DIR)
clean-report-dir: $(COVERAGE_REPORT_DIR)
	test $(COVERAGE_REPORT_DIR) && rm -f $(COVERAGE_REPORT_DIR)/*
test-coverage: $(COVERAGE_REPORT_DIR) clean-report-dir ## Test with coverage enabled
	set -e ; \
	for dir in $(ALL_GO_MOD_DIRS); do \
		echo ">>> Running tests with coverage for module: $${dir}"; \
		DIR_ABS=$$(python -c 'import os, sys; print(os.path.realpath(sys.argv[1]))' $${dir}) ; \
		REPORT_NAME=$$(basename $${DIR_ABS}); \
		(cd "$${dir}" && \
			$(GO) test -count=1 -timeout $(TIMEOUT)s -coverpkg=./... -covermode=$(COVERAGE_MODE) -coverprofile="$(COVERAGE_PROFILE)" ./... && \
			cp $(COVERAGE_PROFILE) "$(COVERAGE_REPORT_DIR_ABS)/$${REPORT_NAME}_$(COVERAGE_PROFILE)" && \
			$(GO) tool cover -html=$(COVERAGE_PROFILE) -o coverage.html); \
	done;
.PHONY: test-coverage clean-report-dir

mod-tidy: ## Check go.mod tidiness
	set -e ; \
	for dir in $(ALL_GO_MOD_DIRS); do \
		echo ">>> Running 'go mod tidy' for module: $${dir}"; \
		(cd "$${dir}" && go mod tidy -go=1.21 -compat=1.21); \
	done; \
	git diff --exit-code;
.PHONY: mod-tidy

vet: ## Run "go vet"
	set -e ; \
	for dir in $(ALL_GO_MOD_DIRS); do \
		echo ">>> Running 'go vet' for module: $${dir}"; \
		(cd "$${dir}" && go vet ./...); \
	done;
.PHONY: vet

lint: ## Lint (using "golangci-lint")
	golangci-lint run
.PHONY: lint

fmt: ## Format all Go files
	gofmt -l -w -s .
.PHONY: fmt
106 vendor/github.com/getsentry/sentry-go/README.md generated vendored Normal file
@@ -0,0 +1,106 @@
<p align="center">
  <a href="https://sentry.io/?utm_source=github&utm_medium=logo" target="_blank">
    <picture>
      <source srcset="https://sentry-brand.storage.googleapis.com/sentry-logo-white.png" media="(prefers-color-scheme: dark)" />
      <source srcset="https://sentry-brand.storage.googleapis.com/sentry-logo-black.png" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" />
      <img src="https://sentry-brand.storage.googleapis.com/sentry-logo-black.png" alt="Sentry" width="280">
    </picture>
  </a>
</p>

# Official Sentry SDK for Go

[![Build Status](https://github.com/getsentry/sentry-go/actions/workflows/test.yml/badge.svg)](https://github.com/getsentry/sentry-go/actions/workflows/test.yml)
[![Go Report Card](https://goreportcard.com/badge/github.com/getsentry/sentry-go)](https://goreportcard.com/report/github.com/getsentry/sentry-go)
[![Discord](https://img.shields.io/discord/621778831602221064)](https://discord.gg/Ww9hbqr)
[![go.dev](https://img.shields.io/badge/go.dev-pkg-007d9c.svg?style=flat)](https://pkg.go.dev/github.com/getsentry/sentry-go)

`sentry-go` provides a Sentry client implementation for the Go programming
language. This is the next generation of the Go SDK for [Sentry](https://sentry.io/),
intended to replace the `raven-go` package.

> Looking for the old `raven-go` SDK documentation? See the Legacy client section [here](https://docs.sentry.io/clients/go/).
> If you want to start using `sentry-go` instead, check out the [migration guide](https://docs.sentry.io/platforms/go/migration/).

## Requirements

The only requirement is a Go compiler.

We verify this package against the 3 most recent releases of Go. Those are the
supported versions. The exact versions are defined in the
[`GitHub workflow`](.github/workflows/test.yml).

In addition, we run tests against the current master branch of the Go toolchain,
though support for this configuration is best-effort.

## Installation

`sentry-go` can be installed like any other Go library through `go get`:

```console
$ go get github.com/getsentry/sentry-go@latest
```

Check out the [list of released versions](https://github.com/getsentry/sentry-go/releases).

## Configuration

To use `sentry-go`, you’ll need to import the `sentry-go` package and initialize
it with your DSN and other [options](https://pkg.go.dev/github.com/getsentry/sentry-go#ClientOptions).

If not specified in the SDK initialization, the
[DSN](https://docs.sentry.io/product/sentry-basics/dsn-explainer/),
[Release](https://docs.sentry.io/product/releases/) and
[Environment](https://docs.sentry.io/product/sentry-basics/environments/)
are read from the environment variables `SENTRY_DSN`, `SENTRY_RELEASE` and
`SENTRY_ENVIRONMENT`, respectively.

More on this in the [Configuration section of the official Sentry Go SDK documentation](https://docs.sentry.io/platforms/go/configuration/).
## Usage

The SDK supports reporting errors and tracking application performance.

To get started, have a look at one of our [examples](_examples/):
- [Basic error instrumentation](_examples/basic/main.go)
- [Error and tracing for HTTP servers](_examples/http/main.go)

We also provide a [complete API reference](https://pkg.go.dev/github.com/getsentry/sentry-go).

For more detailed information about how to get the most out of `sentry-go`,
check out the official documentation:

- [Sentry Go SDK documentation](https://docs.sentry.io/platforms/go/)
- Guides:
  - [net/http](https://docs.sentry.io/platforms/go/guides/http/)
  - [echo](https://docs.sentry.io/platforms/go/guides/echo/)
  - [fasthttp](https://docs.sentry.io/platforms/go/guides/fasthttp/)
  - [fiber](https://docs.sentry.io/platforms/go/guides/fiber/)
  - [gin](https://docs.sentry.io/platforms/go/guides/gin/)
  - [iris](https://docs.sentry.io/platforms/go/guides/iris/)
  - [logrus](https://docs.sentry.io/platforms/go/guides/logrus/)
  - [negroni](https://docs.sentry.io/platforms/go/guides/negroni/)
  - [slog](https://docs.sentry.io/platforms/go/guides/slog/)
  - [zerolog](https://docs.sentry.io/platforms/go/guides/zerolog/)

## Resources

- [Bug Tracker](https://github.com/getsentry/sentry-go/issues)
- [GitHub Project](https://github.com/getsentry/sentry-go)
- [Godocs](https://pkg.go.dev/github.com/getsentry/sentry-go)
- [Documentation](https://docs.sentry.io/platforms/go/)
- [Discussions](https://github.com/getsentry/sentry-go/discussions)
- [Discord](https://discord.gg/Ww9hbqr)
- [Stack Overflow](http://stackoverflow.com/questions/tagged/sentry)
- [Twitter](https://twitter.com/intent/follow?screen_name=getsentry)

## License

Licensed under
[The MIT License](https://opensource.org/licenses/mit/), see
[`LICENSE`](LICENSE).

## Community

Join Sentry's [`#go` channel on Discord](https://discord.gg/Ww9hbqr) to get
involved and help us improve the SDK!
121 vendor/github.com/getsentry/sentry-go/check_in.go generated vendored Normal file
@@ -0,0 +1,121 @@
package sentry

import "time"

type CheckInStatus string

const (
	CheckInStatusInProgress CheckInStatus = "in_progress"
	CheckInStatusOK         CheckInStatus = "ok"
	CheckInStatusError      CheckInStatus = "error"
)

type checkInScheduleType string

const (
	checkInScheduleTypeCrontab  checkInScheduleType = "crontab"
	checkInScheduleTypeInterval checkInScheduleType = "interval"
)

type MonitorSchedule interface {
	// scheduleType is a private method that must be implemented by every
	// monitor schedule implementation. It should never be called; it exists
	// solely to restrict implementations of the MonitorSchedule interface
	// to this package.
	scheduleType() checkInScheduleType
}

type crontabSchedule struct {
	Type  string `json:"type"`
	Value string `json:"value"`
}

func (c crontabSchedule) scheduleType() checkInScheduleType {
	return checkInScheduleTypeCrontab
}

// CrontabSchedule defines the MonitorSchedule with a cron format.
// Example: "8 * * * *".
func CrontabSchedule(scheduleString string) MonitorSchedule {
	return crontabSchedule{
		Type:  string(checkInScheduleTypeCrontab),
		Value: scheduleString,
	}
}

type intervalSchedule struct {
	Type  string `json:"type"`
	Value int64  `json:"value"`
	Unit  string `json:"unit"`
}

func (i intervalSchedule) scheduleType() checkInScheduleType {
	return checkInScheduleTypeInterval
}

type MonitorScheduleUnit string

const (
	MonitorScheduleUnitMinute MonitorScheduleUnit = "minute"
	MonitorScheduleUnitHour   MonitorScheduleUnit = "hour"
	MonitorScheduleUnitDay    MonitorScheduleUnit = "day"
	MonitorScheduleUnitWeek   MonitorScheduleUnit = "week"
	MonitorScheduleUnitMonth  MonitorScheduleUnit = "month"
	MonitorScheduleUnitYear   MonitorScheduleUnit = "year"
)

// IntervalSchedule defines the MonitorSchedule with an interval format.
//
// Example:
//
//	IntervalSchedule(1, sentry.MonitorScheduleUnitDay)
func IntervalSchedule(value int64, unit MonitorScheduleUnit) MonitorSchedule {
	return intervalSchedule{
		Type:  string(checkInScheduleTypeInterval),
		Value: value,
		Unit:  string(unit),
	}
}

type MonitorConfig struct { //nolint: maligned // prefer readability over optimal memory layout
	Schedule MonitorSchedule `json:"schedule,omitempty"`
	// The allowed margin of minutes after the expected check-in time that
	// the monitor will not be considered missed for.
	CheckInMargin int64 `json:"checkin_margin,omitempty"`
	// The allowed duration in minutes that the monitor may be `in_progress`
	// for before being considered failed due to timeout.
	MaxRuntime int64 `json:"max_runtime,omitempty"`
	// A tz database string representing the timezone which the monitor's execution schedule is in.
	// See: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
	Timezone string `json:"timezone,omitempty"`
	// The number of consecutive failed check-ins it takes before an issue is created.
	FailureIssueThreshold int64 `json:"failure_issue_threshold,omitempty"`
	// The number of consecutive OK check-ins it takes before an issue is resolved.
	RecoveryThreshold int64 `json:"recovery_threshold,omitempty"`
}

type CheckIn struct { //nolint: maligned // prefer readability over optimal memory layout
	// Check-In ID (unique and client generated).
	ID EventID `json:"check_in_id"`
	// The distinct slug of the monitor.
	MonitorSlug string `json:"monitor_slug"`
	// The status of the check-in.
	Status CheckInStatus `json:"status"`
	// The duration of the check-in. Will only take effect if the status is ok or error.
	Duration time.Duration `json:"duration,omitempty"`
}

// serializedCheckIn is used by the checkInMarshalJSON method on the Event struct.
// See https://develop.sentry.dev/sdk/check-ins/
type serializedCheckIn struct { //nolint: maligned
	// Check-In ID (unique and client generated).
	CheckInID string `json:"check_in_id"`
	// The distinct slug of the monitor.
	MonitorSlug string `json:"monitor_slug"`
	// The status of the check-in.
	Status CheckInStatus `json:"status"`
	// The duration of the check-in in seconds. Will only take effect if the status is ok or error.
	Duration    float64        `json:"duration,omitempty"`
	Release     string         `json:"release,omitempty"`
	Environment string         `json:"environment,omitempty"`
	MonitorConfig *MonitorConfig `json:"monitor_config,omitempty"`
}
733 vendor/github.com/getsentry/sentry-go/client.go generated vendored Normal file
@@ -0,0 +1,733 @@
package sentry
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/x509"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"math/rand"
|
||||
"net/http"
|
||||
"os"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/getsentry/sentry-go/internal/debug"
|
||||
)
|
||||
|
||||
// The identifier of the SDK.
|
||||
const sdkIdentifier = "sentry.go"
|
||||
|
||||
// maxErrorDepth is the maximum number of errors reported in a chain of errors.
|
||||
// This protects the SDK from an arbitrarily long chain of wrapped errors.
|
||||
//
|
||||
// An additional consideration is that arguably reporting a long chain of errors
|
||||
// is of little use when debugging production errors with Sentry. The Sentry UI
|
||||
// is not optimized for long chains either. The top-level error together with a
|
||||
// stack trace is often the most useful information.
|
||||
const maxErrorDepth = 10
|

// defaultMaxSpans limits the default number of recorded spans per transaction. The limit is
// meant to bound memory usage and prevent too large transaction events that
// would be rejected by Sentry.
const defaultMaxSpans = 1000

// hostname is the host name reported by the kernel. It is precomputed once to
// avoid syscalls when capturing events.
//
// The error is ignored because retrieving the host name is best-effort. If the
// error is non-nil, there is nothing to do other than retrying. We choose not
// to retry for now.
var hostname, _ = os.Hostname()

// lockedRand is a random number generator safe for concurrent use. Its API is
// intentionally limited and it is not meant as a full replacement for a
// rand.Rand.
type lockedRand struct {
	mu sync.Mutex
	r  *rand.Rand
}

// Float64 returns a pseudo-random number in [0.0,1.0).
func (r *lockedRand) Float64() float64 {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.r.Float64()
}

// rng is the internal random number generator.
//
// We do not use the global functions from math/rand because, while they are
// safe for concurrent use, any package in a build could change the seed and
// affect the generated numbers, for instance making them deterministic. On the
// other hand, the source returned from rand.NewSource is not safe for
// concurrent use, so we need to couple its use with a sync.Mutex.
var rng = &lockedRand{
	// #nosec G404 -- We are fine using transparent, non-secure value here.
	r: rand.New(rand.NewSource(time.Now().UnixNano())),
}

// usageError is used to report to Sentry an SDK usage error.
//
// It is not exported because it is never returned by any function or method in
// the exported API.
type usageError struct {
	error
}

// Logger is an instance of log.Logger that is used to provide debug information
// about the running Sentry Client. It can be enabled either by using
// Logger.SetOutput directly or with the Debug client option.
var Logger = log.New(io.Discard, "[Sentry] ", log.LstdFlags)

// EventProcessor is a function that processes an event.
// Event processors are used to change an event before it is sent to Sentry.
type EventProcessor func(event *Event, hint *EventHint) *Event

// EventModifier is the interface that wraps the ApplyToEvent method.
//
// ApplyToEvent changes an event based on external data and/or
// an event hint.
type EventModifier interface {
	ApplyToEvent(event *Event, hint *EventHint, client *Client) *Event
}

var globalEventProcessors []EventProcessor

// AddGlobalEventProcessor adds processor to the global list of event
// processors. Global event processors apply to all events.
//
// AddGlobalEventProcessor is deprecated. Most users will prefer to initialize
// the SDK with Init and provide a ClientOptions.BeforeSend function or use
// Scope.AddEventProcessor instead.
func AddGlobalEventProcessor(processor EventProcessor) {
	globalEventProcessors = append(globalEventProcessors, processor)
}

// Integration allows for registering functions that modify or discard captured events.
type Integration interface {
	Name() string
	SetupOnce(client *Client)
}

// ClientOptions configures an SDK Client.
type ClientOptions struct {
	// The DSN to use. If the DSN is not set, the client is effectively
	// disabled.
	Dsn string
	// In debug mode, the debug information is printed to stdout to help you
	// understand what Sentry is doing.
	Debug bool
	// Configures whether the SDK should generate and attach stack traces to
	// pure capture message calls.
	AttachStacktrace bool
	// The sample rate for event submission in the range [0.0, 1.0]. By default,
	// all events are sent. Thus, as a historical special case, the sample rate
	// 0.0 is treated as if it was 1.0. To drop all events, set the DSN to the
	// empty string.
	SampleRate float64
	// Enable performance tracing.
	EnableTracing bool
	// The sample rate for sampling traces in the range [0.0, 1.0].
	TracesSampleRate float64
	// Used to customize the sampling of traces, overrides TracesSampleRate.
	TracesSampler TracesSampler
	// List of regexp strings that will be used to match against the event's
	// message and, if applicable, caught errors' type and value.
	// If a match is found, the whole event is dropped.
	IgnoreErrors []string
	// List of regexp strings that will be used to match against a transaction's
	// name. If a match is found, then the transaction will be dropped.
	IgnoreTransactions []string
	// If this flag is enabled, certain personally identifiable information (PII) is added by active integrations.
	// By default, no such data is sent.
	SendDefaultPII bool
	// BeforeSend is called before error events are sent to Sentry.
	// Use it to mutate the event or return nil to discard the event.
	BeforeSend func(event *Event, hint *EventHint) *Event
	// BeforeSendTransaction is called before transaction events are sent to Sentry.
	// Use it to mutate the transaction or return nil to discard the transaction.
	BeforeSendTransaction func(event *Event, hint *EventHint) *Event
	// Before breadcrumb add callback.
	BeforeBreadcrumb func(breadcrumb *Breadcrumb, hint *BreadcrumbHint) *Breadcrumb
	// Integrations to be installed on the current Client; receives the default
	// integrations.
	Integrations func([]Integration) []Integration
	// io.Writer implementation that should be used with the Debug mode.
	DebugWriter io.Writer
	// The transport to use. Defaults to HTTPTransport.
	Transport Transport
	// The server name to be reported.
	ServerName string
	// The release to be sent with events.
	//
	// Some Sentry features are built around releases, and, thus, reporting
	// events with a non-empty release improves the product experience. See
	// https://docs.sentry.io/product/releases/.
	//
	// If Release is not set, the SDK will try to derive a default value
	// from environment variables or the Git repository in the working
	// directory.
	//
	// If you distribute a compiled binary, it is recommended to set the
	// Release value explicitly at build time. As an example, you can use:
	//
	//	go build -ldflags='-X main.release=VALUE'
	//
	// That will set the value of a predeclared variable 'release' in the
	// 'main' package to 'VALUE'. Then, use that variable when initializing
	// the SDK:
	//
	//	sentry.Init(ClientOptions{Release: release})
	//
	// See https://golang.org/cmd/go/ and https://golang.org/cmd/link/ for
	// the official documentation of -ldflags and -X, respectively.
	Release string
	// The dist to be sent with events.
	Dist string
	// The environment to be sent with events.
	Environment string
	// Maximum number of breadcrumbs.
	// When MaxBreadcrumbs is negative, breadcrumbs are ignored.
	MaxBreadcrumbs int
	// Maximum number of spans.
	//
	// See https://develop.sentry.dev/sdk/envelopes/#size-limits for size limits
	// applied during event ingestion. Events that exceed these limits might get dropped.
	MaxSpans int
	// An optional pointer to http.Client that will be used with a default
	// HTTPTransport. Using your own client will make HTTPTransport, HTTPProxy,
	// HTTPSProxy and CaCerts options ignored.
	HTTPClient *http.Client
	// An optional pointer to http.Transport that will be used with a default
	// HTTPTransport. Using your own transport will make HTTPProxy, HTTPSProxy
	// and CaCerts options ignored.
	HTTPTransport http.RoundTripper
	// An optional HTTP proxy to use.
	// This will default to the HTTP_PROXY environment variable.
	HTTPProxy string
	// An optional HTTPS proxy to use.
	// This will default to the HTTPS_PROXY environment variable.
	// HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
	HTTPSProxy string
	// An optional set of SSL certificates to use.
	CaCerts *x509.CertPool
	// MaxErrorDepth is the maximum number of errors reported in a chain of errors.
	// This protects the SDK from an arbitrarily long chain of wrapped errors.
	//
	// An additional consideration is that arguably reporting a long chain of errors
	// is of little use when debugging production errors with Sentry. The Sentry UI
	// is not optimized for long chains either. The top-level error together with a
	// stack trace is often the most useful information.
	MaxErrorDepth int
	// Default event tags. These are overridden by tags set on a scope.
	Tags map[string]string
}
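The SampleRate documentation above notes a historical special case: the zero value 0.0 behaves like 1.0. A minimal standalone sketch of that zero-value defaulting pattern (the `Options` struct and `applyDefaults` here are illustrative stand-ins, not SDK API; the real logic runs inside NewClient):

```go
package main

import "fmt"

// Options mirrors, in miniature, the zero-value defaulting done by NewClient.
type Options struct {
	SampleRate    float64
	MaxErrorDepth int
}

func applyDefaults(o Options) Options {
	// A zero SampleRate would drop every event, so it is flipped to 1.0,
	// matching the historical special case described above.
	if o.SampleRate == 0.0 {
		o.SampleRate = 1.0
	}
	if o.MaxErrorDepth == 0 {
		o.MaxErrorDepth = 10
	}
	return o
}

func main() {
	o := applyDefaults(Options{})
	fmt.Println(o.SampleRate, o.MaxErrorDepth)
}
```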

// Client is the underlying processor that is used by the main API and Hub
// instances. It must be created with NewClient.
type Client struct {
	mu              sync.RWMutex
	options         ClientOptions
	dsn             *Dsn
	eventProcessors []EventProcessor
	integrations    []Integration
	sdkIdentifier   string
	sdkVersion      string
	// Transport is read-only. Replacing the transport of an existing client is
	// not supported, create a new client instead.
	Transport Transport
}

// NewClient creates and returns an instance of Client configured using
// ClientOptions.
//
// Most users will not create clients directly. Instead, initialize the SDK with
// Init and use the package-level functions (for simple programs that run on a
// single goroutine) or hub methods (for concurrent programs, for example web
// servers).
func NewClient(options ClientOptions) (*Client, error) {
	// The default error event sample rate for all SDKs is 1.0 (send all).
	//
	// In Go, the zero value (default) for float64 is 0.0, which means that
	// constructing a client with NewClient(ClientOptions{}), or, equivalently,
	// initializing the SDK with Init(ClientOptions{}) without an explicit
	// SampleRate would drop all events.
	//
	// To retain the desired default behavior, we exceptionally flip SampleRate
	// from 0.0 to 1.0 here. Setting the sample rate to 0.0 is not very useful
	// anyway, and the same end result can be achieved in many other ways like
	// not initializing the SDK, setting the DSN to the empty string or using an
	// event processor that always returns nil.
	//
	// An alternative API could be such that default options don't need to be
	// the same as Go's zero values, for example using the Functional Options
	// pattern. That would either require a breaking change if we want to reuse
	// the obvious NewClient name, or a new function as an alternative
	// constructor.
	if options.SampleRate == 0.0 {
		options.SampleRate = 1.0
	}

	if options.Debug {
		debugWriter := options.DebugWriter
		if debugWriter == nil {
			debugWriter = os.Stderr
		}
		Logger.SetOutput(debugWriter)
	}

	if options.Dsn == "" {
		options.Dsn = os.Getenv("SENTRY_DSN")
	}

	if options.Release == "" {
		options.Release = defaultRelease()
	}

	if options.Environment == "" {
		options.Environment = os.Getenv("SENTRY_ENVIRONMENT")
	}

	if options.MaxErrorDepth == 0 {
		options.MaxErrorDepth = maxErrorDepth
	}

	if options.MaxSpans == 0 {
		options.MaxSpans = defaultMaxSpans
	}

	// SENTRYGODEBUG is a comma-separated list of key=value pairs (similar
	// to GODEBUG). It is not a supported feature: recognized debug options
	// may change any time.
	//
	// The intended audience is SDK developers. It is orthogonal to
	// options.Debug, which is also available for SDK users.
	dbg := strings.Split(os.Getenv("SENTRYGODEBUG"), ",")
	sort.Strings(dbg)
	// dbgOpt returns true when the given debug option is enabled, for
	// example SENTRYGODEBUG=someopt=1.
	dbgOpt := func(opt string) bool {
		s := opt + "=1"
		return dbg[sort.SearchStrings(dbg, s)%len(dbg)] == s
	}
	if dbgOpt("httpdump") || dbgOpt("httptrace") {
		options.HTTPTransport = &debug.Transport{
			RoundTripper: http.DefaultTransport,
			Output:       os.Stderr,
			Dump:         dbgOpt("httpdump"),
			Trace:        dbgOpt("httptrace"),
		}
	}

	var dsn *Dsn
	if options.Dsn != "" {
		var err error
		dsn, err = NewDsn(options.Dsn)
		if err != nil {
			return nil, err
		}
	}

	client := Client{
		options:       options,
		dsn:           dsn,
		sdkIdentifier: sdkIdentifier,
		sdkVersion:    SDKVersion,
	}

	client.setupTransport()
	client.setupIntegrations()

	return &client, nil
}
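The dbgOpt closure inside NewClient uses a sorted slice, a binary search, and a modulo guard against an out-of-range insertion index. A standalone sketch of the same lookup (`dbgEnabled` is an illustrative name; the SDK keeps this as an inline closure):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// dbgEnabled reports whether opt is enabled in a SENTRYGODEBUG-style
// comma-separated key=value list, mirroring the dbgOpt closure in NewClient.
func dbgEnabled(env, opt string) bool {
	dbg := strings.Split(env, ",")
	sort.Strings(dbg)
	s := opt + "=1"
	// SearchStrings returns the insertion index; the modulo guards against
	// an out-of-range index when s sorts after every element.
	return dbg[sort.SearchStrings(dbg, s)%len(dbg)] == s
}

func main() {
	fmt.Println(dbgEnabled("httpdump=1,httptrace=0", "httpdump"))
	fmt.Println(dbgEnabled("httpdump=1,httptrace=0", "httptrace"))
}
```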

func (client *Client) setupTransport() {
	opts := client.options
	transport := opts.Transport

	if transport == nil {
		if opts.Dsn == "" {
			transport = new(noopTransport)
		} else {
			httpTransport := NewHTTPTransport()
			// When tracing is enabled, use larger buffer to
			// accommodate more concurrent events.
			// TODO(tracing): consider using separate buffers per
			// event type.
			if opts.EnableTracing {
				httpTransport.BufferSize = 1000
			}
			transport = httpTransport
		}
	}

	transport.Configure(opts)
	client.Transport = transport
}

func (client *Client) setupIntegrations() {
	integrations := []Integration{
		new(contextifyFramesIntegration),
		new(environmentIntegration),
		new(modulesIntegration),
		new(ignoreErrorsIntegration),
		new(ignoreTransactionsIntegration),
		new(globalTagsIntegration),
	}

	if client.options.Integrations != nil {
		integrations = client.options.Integrations(integrations)
	}

	for _, integration := range integrations {
		if client.integrationAlreadyInstalled(integration.Name()) {
			Logger.Printf("Integration %s is already installed\n", integration.Name())
			continue
		}
		client.integrations = append(client.integrations, integration)
		integration.SetupOnce(client)
		Logger.Printf("Integration installed: %s\n", integration.Name())
	}

	sort.Slice(client.integrations, func(i, j int) bool {
		return client.integrations[i].Name() < client.integrations[j].Name()
	})
}

// AddEventProcessor adds an event processor to the client. It must not be
// called from concurrent goroutines. Most users will prefer to use
// ClientOptions.BeforeSend or Scope.AddEventProcessor instead.
//
// Note that typical programs have only a single client created by Init and the
// client is shared among multiple hubs, one per goroutine, such that adding an
// event processor to the client affects all hubs that share the client.
func (client *Client) AddEventProcessor(processor EventProcessor) {
	client.eventProcessors = append(client.eventProcessors, processor)
}

// Options returns ClientOptions for the current Client.
func (client *Client) Options() ClientOptions {
	// Note: internally, consider using `client.options` instead of `client.Options()` to avoid copying the object each time.
	return client.options
}

// CaptureMessage captures an arbitrary message.
func (client *Client) CaptureMessage(message string, hint *EventHint, scope EventModifier) *EventID {
	event := client.EventFromMessage(message, LevelInfo)
	return client.CaptureEvent(event, hint, scope)
}

// CaptureException captures an error.
func (client *Client) CaptureException(exception error, hint *EventHint, scope EventModifier) *EventID {
	event := client.EventFromException(exception, LevelError)
	return client.CaptureEvent(event, hint, scope)
}

// CaptureCheckIn captures a check-in.
func (client *Client) CaptureCheckIn(checkIn *CheckIn, monitorConfig *MonitorConfig, scope EventModifier) *EventID {
	event := client.EventFromCheckIn(checkIn, monitorConfig)
	if event != nil && event.CheckIn != nil {
		client.CaptureEvent(event, nil, scope)
		return &event.CheckIn.ID
	}
	return nil
}

// CaptureEvent captures an event on the currently active client if any.
//
// The event must already be assembled. Typically code would instead use
// the utility methods like CaptureException. The return value is the
// event ID. In case Sentry is disabled or the event was dropped, the return value is nil.
func (client *Client) CaptureEvent(event *Event, hint *EventHint, scope EventModifier) *EventID {
	return client.processEvent(event, hint, scope)
}

// Recover captures a panic.
// Returns the EventID if successful, or nil if there's no error to recover from.
func (client *Client) Recover(err interface{}, hint *EventHint, scope EventModifier) *EventID {
	if err == nil {
		err = recover()
	}

	// Normally we would not pass a nil Context, but RecoverWithContext doesn't
	// use the Context for communicating deadline nor cancellation. All it does
	// is store the Context in the EventHint and there nil means the Context is
	// not available.
	// nolint: staticcheck
	return client.RecoverWithContext(nil, err, hint, scope)
}

// RecoverWithContext captures a panic and passes the relevant context object.
// Returns the EventID if successful, or nil if there's no error to recover from.
func (client *Client) RecoverWithContext(
	ctx context.Context,
	err interface{},
	hint *EventHint,
	scope EventModifier,
) *EventID {
	if err == nil {
		err = recover()
	}
	if err == nil {
		return nil
	}

	if ctx != nil {
		if hint == nil {
			hint = &EventHint{}
		}
		if hint.Context == nil {
			hint.Context = ctx
		}
	}

	var event *Event
	switch err := err.(type) {
	case error:
		event = client.EventFromException(err, LevelFatal)
	case string:
		event = client.EventFromMessage(err, LevelFatal)
	default:
		event = client.EventFromMessage(fmt.Sprintf("%#v", err), LevelFatal)
	}
	return client.CaptureEvent(event, hint, scope)
}

// Flush waits until the underlying Transport sends any buffered events to the
// Sentry server, blocking for at most the given timeout. It returns false if
// the timeout was reached. In that case, some events may not have been sent.
//
// Flush should be called before terminating the program to avoid
// unintentionally dropping events.
//
// Do not call Flush indiscriminately after every call to CaptureEvent,
// CaptureException or CaptureMessage. Instead, to have the SDK send events over
// the network synchronously, configure it to use the HTTPSyncTransport in the
// call to Init.
func (client *Client) Flush(timeout time.Duration) bool {
	return client.Transport.Flush(timeout)
}

// Close cleans up underlying Transport resources.
//
// Close should be called after Flush and before terminating the program,
// otherwise some events may be lost.
func (client *Client) Close() {
	client.Transport.Close()
}

// EventFromMessage creates an event from the given message string.
func (client *Client) EventFromMessage(message string, level Level) *Event {
	if message == "" {
		err := usageError{fmt.Errorf("%s called with empty message", callerFunctionName())}
		return client.EventFromException(err, level)
	}
	event := NewEvent()
	event.Level = level
	event.Message = message

	if client.options.AttachStacktrace {
		event.Threads = []Thread{{
			Stacktrace: NewStacktrace(),
			Crashed:    false,
			Current:    true,
		}}
	}

	return event
}

// EventFromException creates a new Sentry event from the given `error` instance.
func (client *Client) EventFromException(exception error, level Level) *Event {
	event := NewEvent()
	event.Level = level

	err := exception
	if err == nil {
		err = usageError{fmt.Errorf("%s called with nil error", callerFunctionName())}
	}

	event.SetException(err, client.options.MaxErrorDepth)

	return event
}

// EventFromCheckIn creates a new Sentry event from the given `check_in` instance.
func (client *Client) EventFromCheckIn(checkIn *CheckIn, monitorConfig *MonitorConfig) *Event {
	if checkIn == nil {
		return nil
	}

	event := NewEvent()
	event.Type = checkInType

	var checkInID EventID
	if checkIn.ID == "" {
		checkInID = EventID(uuid())
	} else {
		checkInID = checkIn.ID
	}

	event.CheckIn = &CheckIn{
		ID:          checkInID,
		MonitorSlug: checkIn.MonitorSlug,
		Status:      checkIn.Status,
		Duration:    checkIn.Duration,
	}
	event.MonitorConfig = monitorConfig

	return event
}

func (client *Client) SetSDKIdentifier(identifier string) {
	client.mu.Lock()
	defer client.mu.Unlock()

	client.sdkIdentifier = identifier
}

func (client *Client) GetSDKIdentifier() string {
	client.mu.RLock()
	defer client.mu.RUnlock()

	return client.sdkIdentifier
}

func (client *Client) processEvent(event *Event, hint *EventHint, scope EventModifier) *EventID {
	if event == nil {
		err := usageError{fmt.Errorf("%s called with nil event", callerFunctionName())}
		return client.CaptureException(err, hint, scope)
	}

	// Transactions are sampled by options.TracesSampleRate or
	// options.TracesSampler when they are started. Other events
	// (errors, messages) are sampled here. Does not apply to check-ins.
	if event.Type != transactionType && event.Type != checkInType && !sample(client.options.SampleRate) {
		Logger.Println("Event dropped due to SampleRate hit.")
		return nil
	}

	if event = client.prepareEvent(event, hint, scope); event == nil {
		return nil
	}

	// Apply beforeSend* processors
	if hint == nil {
		hint = &EventHint{}
	}
	if event.Type == transactionType && client.options.BeforeSendTransaction != nil {
		// Transaction events
		if event = client.options.BeforeSendTransaction(event, hint); event == nil {
			Logger.Println("Transaction dropped due to BeforeSendTransaction callback.")
			return nil
		}
	} else if event.Type != transactionType && event.Type != checkInType && client.options.BeforeSend != nil {
		// All other events
		if event = client.options.BeforeSend(event, hint); event == nil {
			Logger.Println("Event dropped due to BeforeSend callback.")
			return nil
		}
	}

	client.Transport.SendEvent(event)

	return &event.EventID
}

func (client *Client) prepareEvent(event *Event, hint *EventHint, scope EventModifier) *Event {
	if event.EventID == "" {
		// TODO set EventID when the event is created, same as in other SDKs. It's necessary for profileTransaction.ID.
		event.EventID = EventID(uuid())
	}

	if event.Timestamp.IsZero() {
		event.Timestamp = time.Now()
	}

	if event.Level == "" {
		event.Level = LevelInfo
	}

	if event.ServerName == "" {
		event.ServerName = client.options.ServerName

		if event.ServerName == "" {
			event.ServerName = hostname
		}
	}

	if event.Release == "" {
		event.Release = client.options.Release
	}

	if event.Dist == "" {
		event.Dist = client.options.Dist
	}

	if event.Environment == "" {
		event.Environment = client.options.Environment
	}

	event.Platform = "go"
	event.Sdk = SdkInfo{
		Name:         client.GetSDKIdentifier(),
		Version:      SDKVersion,
		Integrations: client.listIntegrations(),
		Packages: []SdkPackage{{
			Name:    "sentry-go",
			Version: SDKVersion,
		}},
	}

	if scope != nil {
		event = scope.ApplyToEvent(event, hint, client)
		if event == nil {
			return nil
		}
	}

	for _, processor := range client.eventProcessors {
		id := event.EventID
		event = processor(event, hint)
		if event == nil {
			Logger.Printf("Event dropped by one of the Client EventProcessors: %s\n", id)
			return nil
		}
	}

	for _, processor := range globalEventProcessors {
		id := event.EventID
		event = processor(event, hint)
		if event == nil {
			Logger.Printf("Event dropped by one of the Global EventProcessors: %s\n", id)
			return nil
		}
	}

	return event
}

func (client *Client) listIntegrations() []string {
	integrations := make([]string, len(client.integrations))
	for i, integration := range client.integrations {
		integrations[i] = integration.Name()
	}
	return integrations
}

func (client *Client) integrationAlreadyInstalled(name string) bool {
	for _, integration := range client.integrations {
		if integration.Name() == name {
			return true
		}
	}
	return false
}

// sample returns true with the given probability, which must be in the range
// [0.0, 1.0].
func sample(probability float64) bool {
	return rng.Float64() < probability
}
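The sample helper above is a one-liner, but it only works under concurrency because rng is the mutex-guarded lockedRand. A standalone sketch combining the two, with the deterministic boundary cases (Float64 is in [0.0, 1.0), so p=1.0 always samples and p=0.0 never does):

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// lockedRand and sample mirror the SDK's concurrency-safe sampling helpers.
type lockedRand struct {
	mu sync.Mutex
	r  *rand.Rand
}

func (r *lockedRand) Float64() float64 {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.r.Float64()
}

var rng = &lockedRand{r: rand.New(rand.NewSource(time.Now().UnixNano()))}

// sample returns true with the given probability in [0.0, 1.0].
func sample(probability float64) bool {
	return rng.Float64() < probability
}

func main() {
	fmt.Println(sample(1.0), sample(0.0))
}
```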
6
vendor/github.com/getsentry/sentry-go/doc.go
generated
vendored
Normal file
@@ -0,0 +1,6 @@
/*
Package repository: https://github.com/getsentry/sentry-go/

For more information about Sentry and SDK features, please have a look at the official documentation site: https://docs.sentry.io/platforms/go/
*/
package sentry
233
vendor/github.com/getsentry/sentry-go/dsn.go
generated
vendored
Normal file
@@ -0,0 +1,233 @@
package sentry

import (
	"encoding/json"
	"fmt"
	"net/url"
	"strconv"
	"strings"
	"time"
)

type scheme string

const (
	schemeHTTP  scheme = "http"
	schemeHTTPS scheme = "https"
)

func (scheme scheme) defaultPort() int {
	switch scheme {
	case schemeHTTPS:
		return 443
	case schemeHTTP:
		return 80
	default:
		return 80
	}
}

// DsnParseError represents an error that occurs if a Sentry
// DSN cannot be parsed.
type DsnParseError struct {
	Message string
}

func (e DsnParseError) Error() string {
	return "[Sentry] DsnParseError: " + e.Message
}

// Dsn is used as the remote address source for the client transport.
type Dsn struct {
	scheme    scheme
	publicKey string
	secretKey string
	host      string
	port      int
	path      string
	projectID string
}

// NewDsn creates a Dsn by parsing rawURL. Most users will never call this
// function directly. It is provided for use in custom Transport
// implementations.
func NewDsn(rawURL string) (*Dsn, error) {
	// Parse
	parsedURL, err := url.Parse(rawURL)
	if err != nil {
		return nil, &DsnParseError{fmt.Sprintf("invalid url: %v", err)}
	}

	// Scheme
	var scheme scheme
	switch parsedURL.Scheme {
	case "http":
		scheme = schemeHTTP
	case "https":
		scheme = schemeHTTPS
	default:
		return nil, &DsnParseError{"invalid scheme"}
	}

	// PublicKey
	publicKey := parsedURL.User.Username()
	if publicKey == "" {
		return nil, &DsnParseError{"empty username"}
	}

	// SecretKey
	var secretKey string
	if parsedSecretKey, ok := parsedURL.User.Password(); ok {
		secretKey = parsedSecretKey
	}

	// Host
	host := parsedURL.Hostname()
	if host == "" {
		return nil, &DsnParseError{"empty host"}
	}

	// Port
	var port int
	if p := parsedURL.Port(); p != "" {
		port, err = strconv.Atoi(p)
		if err != nil {
			return nil, &DsnParseError{"invalid port"}
		}
	} else {
		port = scheme.defaultPort()
	}

	// ProjectID
	if parsedURL.Path == "" || parsedURL.Path == "/" {
		return nil, &DsnParseError{"empty project id"}
	}
	pathSegments := strings.Split(parsedURL.Path[1:], "/")
	projectID := pathSegments[len(pathSegments)-1]

	if projectID == "" {
		return nil, &DsnParseError{"empty project id"}
	}

	// Path
	var path string
	if len(pathSegments) > 1 {
		path = "/" + strings.Join(pathSegments[0:len(pathSegments)-1], "/")
	}

	return &Dsn{
		scheme:    scheme,
		publicKey: publicKey,
		secretKey: secretKey,
		host:      host,
		port:      port,
		path:      path,
		projectID: projectID,
	}, nil
}
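NewDsn treats the last path segment as the project ID, and GetAPIURL (below) turns that into the envelope endpoint. A trimmed-down, stdlib-only sketch of that derivation (`apiURL` is an illustrative helper, not SDK API, and it omits most of NewDsn's validation):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// apiURL sketches how NewDsn plus GetAPIURL derive the envelope endpoint
// from a DSN: the last path segment is the project ID, anything before it
// stays as a path prefix.
func apiURL(rawDSN string) (string, error) {
	u, err := url.Parse(rawDSN)
	if err != nil {
		return "", err
	}
	segments := strings.Split(strings.TrimPrefix(u.Path, "/"), "/")
	projectID := segments[len(segments)-1]
	if projectID == "" {
		return "", fmt.Errorf("empty project id")
	}
	out := fmt.Sprintf("%s://%s", u.Scheme, u.Hostname())
	if p := u.Port(); p != "" {
		out += ":" + p
	}
	if len(segments) > 1 {
		out += "/" + strings.Join(segments[:len(segments)-1], "/")
	}
	return out + "/api/" + projectID + "/envelope/", nil
}

func main() {
	got, _ := apiURL("https://public@o123.ingest.sentry.io/42")
	fmt.Println(got)
}
```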
||||
|
||||
// String formats Dsn struct into a valid string url.
|
||||
func (dsn Dsn) String() string {
|
||||
var url string
|
||||
url += fmt.Sprintf("%s://%s", dsn.scheme, dsn.publicKey)
|
||||
if dsn.secretKey != "" {
|
||||
url += fmt.Sprintf(":%s", dsn.secretKey)
|
||||
}
|
||||
url += fmt.Sprintf("@%s", dsn.host)
|
||||
if dsn.port != dsn.scheme.defaultPort() {
|
||||
url += fmt.Sprintf(":%d", dsn.port)
|
||||
}
|
||||
if dsn.path != "" {
|
||||
url += dsn.path
|
||||
}
|
||||
url += fmt.Sprintf("/%s", dsn.projectID)
|
||||
return url
|
||||
}
|
||||
|
||||
// GetScheme returns the scheme of the DSN.
func (dsn Dsn) GetScheme() string {
	return string(dsn.scheme)
}

// GetPublicKey returns the public key of the DSN.
func (dsn Dsn) GetPublicKey() string {
	return dsn.publicKey
}

// GetSecretKey returns the secret key of the DSN.
func (dsn Dsn) GetSecretKey() string {
	return dsn.secretKey
}

// GetHost returns the host of the DSN.
func (dsn Dsn) GetHost() string {
	return dsn.host
}

// GetPort returns the port of the DSN.
func (dsn Dsn) GetPort() int {
	return dsn.port
}

// GetPath returns the path of the DSN.
func (dsn Dsn) GetPath() string {
	return dsn.path
}

// GetProjectID returns the project ID of the DSN.
func (dsn Dsn) GetProjectID() string {
	return dsn.projectID
}

// GetAPIURL returns the URL of the envelope endpoint of the project
// associated with the DSN.
func (dsn Dsn) GetAPIURL() *url.URL {
	var rawURL string
	rawURL += fmt.Sprintf("%s://%s", dsn.scheme, dsn.host)
	if dsn.port != dsn.scheme.defaultPort() {
		rawURL += fmt.Sprintf(":%d", dsn.port)
	}
	if dsn.path != "" {
		rawURL += dsn.path
	}
	rawURL += fmt.Sprintf("/api/%s/%s/", dsn.projectID, "envelope")
	parsedURL, _ := url.Parse(rawURL)
	return parsedURL
}

// RequestHeaders returns all the necessary headers that have to be used in the transport when sending events
// to the /store endpoint.
//
// Deprecated: This method shall only be used if you want to implement your own transport that sends events to
// the /store endpoint. If you're using the transport provided by the SDK, all necessary headers to authenticate
// against the /envelope endpoint are added automatically.
func (dsn Dsn) RequestHeaders() map[string]string {
	auth := fmt.Sprintf("Sentry sentry_version=%s, sentry_timestamp=%d, "+
		"sentry_client=sentry.go/%s, sentry_key=%s", apiVersion, time.Now().Unix(), SDKVersion, dsn.publicKey)

	if dsn.secretKey != "" {
		auth = fmt.Sprintf("%s, sentry_secret=%s", auth, dsn.secretKey)
	}

	return map[string]string{
		"Content-Type":  "application/json",
		"X-Sentry-Auth": auth,
	}
}

// MarshalJSON converts the Dsn struct to JSON.
func (dsn Dsn) MarshalJSON() ([]byte, error) {
	return json.Marshal(dsn.String())
}

// UnmarshalJSON converts JSON data to the Dsn struct.
func (dsn *Dsn) UnmarshalJSON(data []byte) error {
	var str string
	_ = json.Unmarshal(data, &str)
	newDsn, err := NewDsn(str)
	if err != nil {
		return err
	}
	*dsn = *newDsn
	return nil
}
154
vendor/github.com/getsentry/sentry-go/dynamic_sampling_context.go
generated
vendored
Normal file
@@ -0,0 +1,154 @@
package sentry

import (
	"strconv"
	"strings"

	"github.com/getsentry/sentry-go/internal/otel/baggage"
)

const (
	sentryPrefix = "sentry-"
)

// DynamicSamplingContext holds information about the current event that can be used to make dynamic sampling decisions.
type DynamicSamplingContext struct {
	Entries map[string]string
	Frozen  bool
}

// DynamicSamplingContextFromHeader parses a baggage header and collects all
// entries whose keys carry the "sentry-" prefix.
func DynamicSamplingContextFromHeader(header []byte) (DynamicSamplingContext, error) {
	bag, err := baggage.Parse(string(header))
	if err != nil {
		return DynamicSamplingContext{}, err
	}

	entries := map[string]string{}
	for _, member := range bag.Members() {
		// We only store baggage members if their key starts with "sentry-".
		if k, v := member.Key(), member.Value(); strings.HasPrefix(k, sentryPrefix) {
			entries[strings.TrimPrefix(k, sentryPrefix)] = v
		}
	}

	return DynamicSamplingContext{
		Entries: entries,
		// If there's at least one Sentry value, we consider the DSC frozen.
		Frozen: len(entries) > 0,
	}, nil
}

// DynamicSamplingContextFromTransaction builds a DynamicSamplingContext from
// the transaction span and its associated hub, client, and scope.
func DynamicSamplingContextFromTransaction(span *Span) DynamicSamplingContext {
	hub := hubFromContext(span.Context())
	scope := hub.Scope()
	client := hub.Client()

	if client == nil || scope == nil {
		return DynamicSamplingContext{
			Entries: map[string]string{},
			Frozen:  false,
		}
	}

	entries := make(map[string]string)

	if traceID := span.TraceID.String(); traceID != "" {
		entries["trace_id"] = traceID
	}
	if sampleRate := span.sampleRate; sampleRate != 0 {
		entries["sample_rate"] = strconv.FormatFloat(sampleRate, 'f', -1, 64)
	}

	if dsn := client.dsn; dsn != nil {
		if publicKey := dsn.publicKey; publicKey != "" {
			entries["public_key"] = publicKey
		}
	}
	if release := client.options.Release; release != "" {
		entries["release"] = release
	}
	if environment := client.options.Environment; environment != "" {
		entries["environment"] = environment
	}

	// Only include the transaction name if it's of good quality (not empty and not SourceURL).
	if span.Source != "" && span.Source != SourceURL {
		if span.IsTransaction() {
			entries["transaction"] = span.Name
		}
	}

	entries["sampled"] = strconv.FormatBool(span.Sampled.Bool())

	return DynamicSamplingContext{Entries: entries, Frozen: true}
}

// HasEntries reports whether the context carries any entries.
func (d DynamicSamplingContext) HasEntries() bool {
	return len(d.Entries) > 0
}

// IsFrozen reports whether the context may no longer be modified.
func (d DynamicSamplingContext) IsFrozen() bool {
	return d.Frozen
}

// String serializes the DynamicSamplingContext back into a baggage header value.
func (d DynamicSamplingContext) String() string {
	members := []baggage.Member{}
	for k, entry := range d.Entries {
		member, err := baggage.NewMember(sentryPrefix+k, entry)
		if err != nil {
			continue
		}
		members = append(members, member)
	}

	if len(members) == 0 {
		return ""
	}

	baggage, err := baggage.New(members...)
	if err != nil {
		return ""
	}

	return baggage.String()
}

// DynamicSamplingContextFromScope constructs a new DynamicSamplingContext
// using a scope and client. Accessing fields on the scope is not thread safe,
// and this function should only be called within scope methods.
func DynamicSamplingContextFromScope(scope *Scope, client *Client) DynamicSamplingContext {
	entries := map[string]string{}

	if client == nil || scope == nil {
		return DynamicSamplingContext{
			Entries: entries,
			Frozen:  false,
		}
	}

	propagationContext := scope.propagationContext

	if traceID := propagationContext.TraceID.String(); traceID != "" {
		entries["trace_id"] = traceID
	}
	if sampleRate := client.options.TracesSampleRate; sampleRate != 0 {
		entries["sample_rate"] = strconv.FormatFloat(sampleRate, 'f', -1, 64)
	}

	if dsn := client.dsn; dsn != nil {
		if publicKey := dsn.publicKey; publicKey != "" {
			entries["public_key"] = publicKey
		}
	}
	if release := client.options.Release; release != "" {
		entries["release"] = release
	}
	if environment := client.options.Environment; environment != "" {
		entries["environment"] = environment
	}

	return DynamicSamplingContext{
		Entries: entries,
		Frozen:  true,
	}
}
21
vendor/github.com/getsentry/sentry-go/echo/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2019 Functional Software, Inc. dba Sentry

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
135
vendor/github.com/getsentry/sentry-go/echo/README.md
generated
vendored
Normal file
@@ -0,0 +1,135 @@
<p align="center">
  <a href="https://sentry.io" target="_blank" align="center">
    <img src="https://sentry-brand.storage.googleapis.com/sentry-logo-black.png" width="280">
  </a>
  <br />
</p>

# Official Sentry Echo Handler for Sentry-go SDK

**go.dev:** https://pkg.go.dev/github.com/getsentry/sentry-go/echo

**Example:** https://github.com/getsentry/sentry-go/tree/master/_examples/echo

## Installation

```sh
go get github.com/getsentry/sentry-go/echo
```

```go
import (
	"fmt"
	"net/http"

	"github.com/getsentry/sentry-go"
	sentryecho "github.com/getsentry/sentry-go/echo"
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

// To initialize Sentry's handler, you need to initialize Sentry itself beforehand
if err := sentry.Init(sentry.ClientOptions{
	Dsn: "your-public-dsn",
}); err != nil {
	fmt.Printf("Sentry initialization failed: %v\n", err)
}

// Then create your app
app := echo.New()

app.Use(middleware.Logger())
app.Use(middleware.Recover())

// Once it's done, you can attach the handler as one of your middleware
app.Use(sentryecho.New(sentryecho.Options{}))

// Set up routes
app.GET("/", func(ctx echo.Context) error {
	return ctx.String(http.StatusOK, "Hello, World!")
})

// And run it
app.Logger.Fatal(app.Start(":3000"))
```

## Configuration

`sentryecho` accepts a struct of `Options` that allows you to configure how the handler will behave.

Currently it respects 3 options:

```go
// Repanic configures whether Sentry should repanic after recovery, in most cases it should be set to true,
// as echo includes its own Recover middleware that handles http responses.
Repanic bool
// WaitForDelivery configures whether you want to block the request before moving forward with the response.
// Because Echo's `Recover` handler doesn't restart the application,
// it's safe to either skip this option or set it to `false`.
WaitForDelivery bool
// Timeout for the event delivery requests.
Timeout time.Duration
```

## Usage

`sentryecho` attaches an instance of `*sentry.Hub` (https://pkg.go.dev/github.com/getsentry/sentry-go#Hub) to the `echo.Context`, which makes it available throughout the rest of the request's lifetime.
You can access it by calling the `sentryecho.GetHubFromContext()` method on the context in any of your subsequent middleware and routes.
It should be used instead of the global `sentry.CaptureMessage`, `sentry.CaptureException`, or any other calls, as it keeps the separation of data between requests.

**Keep in mind that `*sentry.Hub` won't be available in middleware attached before `sentryecho`!**

```go
app := echo.New()

app.Use(middleware.Logger())
app.Use(middleware.Recover())

app.Use(sentryecho.New(sentryecho.Options{
	Repanic: true,
}))

app.Use(func(next echo.HandlerFunc) echo.HandlerFunc {
	return func(ctx echo.Context) error {
		if hub := sentryecho.GetHubFromContext(ctx); hub != nil {
			hub.Scope().SetTag("someRandomTag", "maybeYouNeedIt")
		}
		return next(ctx)
	}
})

app.GET("/", func(ctx echo.Context) error {
	if hub := sentryecho.GetHubFromContext(ctx); hub != nil {
		hub.WithScope(func(scope *sentry.Scope) {
			scope.SetExtra("unwantedQuery", "someQueryDataMaybe")
			hub.CaptureMessage("User provided unwanted query string, but we recovered just fine")
		})
	}
	return ctx.String(http.StatusOK, "Hello, World!")
})

app.GET("/foo", func(ctx echo.Context) error {
	// sentryecho handler will catch it just fine. Also, because we attached "someRandomTag"
	// in the middleware before, it will be sent through as well
	panic("y tho")
})

app.Logger.Fatal(app.Start(":3000"))
```

### Accessing Request in `BeforeSend` callback

```go
sentry.Init(sentry.ClientOptions{
	Dsn: "your-public-dsn",
	BeforeSend: func(event *sentry.Event, hint *sentry.EventHint) *sentry.Event {
		if hint.Context != nil {
			if req, ok := hint.Context.Value(sentry.RequestContextKey).(*http.Request); ok {
				// You have access to the original Request here
			}
		}

		return event
	},
})
```
158
vendor/github.com/getsentry/sentry-go/echo/sentryecho.go
generated
vendored
Normal file
@@ -0,0 +1,158 @@
package sentryecho

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"github.com/getsentry/sentry-go"
	"github.com/labstack/echo/v4"
)

const (
	// sdkIdentifier is the identifier of the Echo SDK.
	sdkIdentifier = "sentry.go.echo"

	// valuesKey is used as a key to store the Sentry Hub instance on the echo.Context.
	valuesKey = "sentry"

	// transactionKey is used as a key to store the Sentry transaction on the echo.Context.
	transactionKey = "sentry_transaction"

	// errorKey is used as a key to store the error on the echo.Context.
	errorKey = "error"
)

type handler struct {
	repanic         bool
	waitForDelivery bool
	timeout         time.Duration
}

type Options struct {
	// Repanic configures whether Sentry should repanic after recovery, in most cases it should be set to true,
	// as Echo includes its own Recover middleware that handles HTTP responses.
	Repanic bool
	// WaitForDelivery configures whether you want to block the request before moving forward with the response.
	// Because Echo's Recover handler doesn't restart the application,
	// it's safe to either skip this option or set it to false.
	WaitForDelivery bool
	// Timeout for the event delivery requests.
	Timeout time.Duration
}

// New returns an echo.MiddlewareFunc that can be attached with Use().
// If no Timeout is configured, it defaults to 2 seconds.
func New(options Options) echo.MiddlewareFunc {
	if options.Timeout == 0 {
		options.Timeout = 2 * time.Second
	}

	return (&handler{
		repanic:         options.Repanic,
		timeout:         options.Timeout,
		waitForDelivery: options.WaitForDelivery,
	}).handle
}

func (h *handler) handle(next echo.HandlerFunc) echo.HandlerFunc {
	return func(ctx echo.Context) error {
		hub := GetHubFromContext(ctx)
		if hub == nil {
			hub = sentry.CurrentHub().Clone()
		}

		if client := hub.Client(); client != nil {
			client.SetSDKIdentifier(sdkIdentifier)
		}

		r := ctx.Request()

		transactionName := r.URL.Path
		transactionSource := sentry.SourceURL

		if path := ctx.Path(); path != "" {
			transactionName = path
			transactionSource = sentry.SourceRoute
		}

		options := []sentry.SpanOption{
			sentry.ContinueTrace(hub, r.Header.Get(sentry.SentryTraceHeader), r.Header.Get(sentry.SentryBaggageHeader)),
			sentry.WithOpName("http.server"),
			sentry.WithTransactionSource(transactionSource),
			sentry.WithSpanOrigin(sentry.SpanOriginEcho),
		}

		transaction := sentry.StartTransaction(
			sentry.SetHubOnContext(r.Context(), hub),
			fmt.Sprintf("%s %s", r.Method, transactionName),
			options...,
		)

		transaction.SetData("http.request.method", r.Method)

		defer func() {
			status := ctx.Response().Status
			if err := ctx.Get(errorKey); err != nil {
				if httpError, ok := err.(*echo.HTTPError); ok {
					status = httpError.Code
				}
			}

			transaction.Status = sentry.HTTPtoSpanStatus(status)
			transaction.SetData("http.response.status_code", status)
			transaction.Finish()
		}()

		hub.Scope().SetRequest(r)
		ctx.Set(valuesKey, hub)
		ctx.Set(transactionKey, transaction)
		defer h.recoverWithSentry(hub, r)

		err := next(ctx)
		if err != nil {
			// Store the error so it can be used in the deferred function.
			ctx.Set(errorKey, err)
		}

		return err
	}
}

func (h *handler) recoverWithSentry(hub *sentry.Hub, r *http.Request) {
	if err := recover(); err != nil {
		eventID := hub.RecoverWithContext(
			context.WithValue(r.Context(), sentry.RequestContextKey, r),
			err,
		)
		if eventID != nil && h.waitForDelivery {
			hub.Flush(h.timeout)
		}
		if h.repanic {
			panic(err)
		}
	}
}

// GetHubFromContext retrieves the attached *sentry.Hub instance from the echo.Context.
func GetHubFromContext(ctx echo.Context) *sentry.Hub {
	if hub, ok := ctx.Get(valuesKey).(*sentry.Hub); ok {
		return hub
	}
	return nil
}

// SetHubOnContext attaches a *sentry.Hub instance to the echo.Context.
func SetHubOnContext(ctx echo.Context, hub *sentry.Hub) {
	ctx.Set(valuesKey, hub)
}

// GetSpanFromContext retrieves the attached *sentry.Span instance from the echo.Context.
// If there is no transaction on the echo.Context, it will return nil.
func GetSpanFromContext(ctx echo.Context) *sentry.Span {
	if span, ok := ctx.Get(transactionKey).(*sentry.Span); ok {
		return span
	}
	return nil
}
423
vendor/github.com/getsentry/sentry-go/hub.go
generated
vendored
Normal file
@@ -0,0 +1,423 @@
package sentry

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type contextKey int

// Keys used to store values in a Context. Use with Context.Value to access
// values stored by the SDK.
const (
	// HubContextKey is the key used to store the current Hub.
	HubContextKey = contextKey(1)
	// RequestContextKey is the key used to store the current http.Request.
	RequestContextKey = contextKey(2)
)

// defaultMaxBreadcrumbs is the default maximum number of breadcrumbs added to
// an event. Can be overwritten with the maxBreadcrumbs option.
const defaultMaxBreadcrumbs = 30

// maxBreadcrumbs is the absolute maximum number of breadcrumbs added to an
// event. The maxBreadcrumbs option cannot be set higher than this value.
const maxBreadcrumbs = 100

// currentHub is the initial Hub with no Client bound and an empty Scope.
var currentHub = NewHub(nil, NewScope())

// Hub is the central object that manages scopes and clients.
//
// It can be used to capture events and manage the scope. A default hub is
// available automatically.
//
// In most situations developers do not need to interface with the hub
// directly. Instead, top-level convenience functions are exposed that
// automatically dispatch to the global (CurrentHub) hub. In some situations
// this might not be possible, in which case it becomes necessary to work with
// the hub manually. This is for instance the case when working with async code.
type Hub struct {
	mu          sync.RWMutex
	stack       *stack
	lastEventID EventID
}

type layer struct {
	// mu protects concurrent reads and writes to client.
	mu     sync.RWMutex
	client *Client
	// scope is read-only, not protected by mu.
	scope *Scope
}

// Client returns the layer's client. Safe for concurrent use.
func (l *layer) Client() *Client {
	l.mu.RLock()
	defer l.mu.RUnlock()
	return l.client
}

// SetClient sets the layer's client. Safe for concurrent use.
func (l *layer) SetClient(c *Client) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.client = c
}

type stack []*layer

// NewHub returns an instance of a Hub with provided Client and Scope bound.
func NewHub(client *Client, scope *Scope) *Hub {
	hub := Hub{
		stack: &stack{{
			client: client,
			scope:  scope,
		}},
	}
	return &hub
}

// CurrentHub returns an instance of previously initialized Hub stored in the global namespace.
func CurrentHub() *Hub {
	return currentHub
}

// LastEventID returns the ID of the last event (error or message) captured
// through the hub and sent to the underlying transport.
//
// Transactions and events dropped by sampling or event processors do not change
// the last event ID.
//
// LastEventID is a convenience method to cover use cases in which errors are
// captured indirectly and the ID is needed. For example, it can be used as part
// of an HTTP middleware to log the ID of the last error, if any.
//
// For more flexibility, consider instead using the ClientOptions.BeforeSend
// function or event processors.
func (hub *Hub) LastEventID() EventID {
	hub.mu.RLock()
	defer hub.mu.RUnlock()

	return hub.lastEventID
}

// stackTop returns the top layer of the hub stack. Valid hubs always have at
// least one layer, therefore stackTop always returns a non-nil pointer.
func (hub *Hub) stackTop() *layer {
	hub.mu.RLock()
	defer hub.mu.RUnlock()

	stack := hub.stack
	stackLen := len(*stack)
	top := (*stack)[stackLen-1]
	return top
}

// Clone returns a copy of the current Hub with top-most scope and client copied over.
func (hub *Hub) Clone() *Hub {
	top := hub.stackTop()
	scope := top.scope
	if scope != nil {
		scope = scope.Clone()
	}
	return NewHub(top.Client(), scope)
}

// Scope returns top-level Scope of the current Hub or nil if no Scope is bound.
func (hub *Hub) Scope() *Scope {
	top := hub.stackTop()
	return top.scope
}

// Client returns top-level Client of the current Hub or nil if no Client is bound.
func (hub *Hub) Client() *Client {
	top := hub.stackTop()
	return top.Client()
}

// PushScope pushes a new scope for the current Hub and reuses previously bound Client.
func (hub *Hub) PushScope() *Scope {
	top := hub.stackTop()

	var scope *Scope
	if top.scope != nil {
		scope = top.scope.Clone()
	} else {
		scope = NewScope()
	}

	hub.mu.Lock()
	defer hub.mu.Unlock()

	*hub.stack = append(*hub.stack, &layer{
		client: top.Client(),
		scope:  scope,
	})

	return scope
}

// PopScope drops the most recent scope.
//
// Calls to PopScope must be coordinated with PushScope. For most cases, using
// WithScope should be more convenient.
//
// Calls to PopScope that do not match previous calls to PushScope are silently
// ignored.
func (hub *Hub) PopScope() {
	hub.mu.Lock()
	defer hub.mu.Unlock()

	stack := *hub.stack
	stackLen := len(stack)
	if stackLen > 1 {
		// Never pop the last item off the stack, the stack should always have
		// at least one item.
		*hub.stack = stack[0 : stackLen-1]
	}
}

// BindClient binds a new Client for the current Hub.
func (hub *Hub) BindClient(client *Client) {
	top := hub.stackTop()
	top.SetClient(client)
}

// WithScope runs f in an isolated temporary scope.
//
// It is useful when extra data should be sent with a single capture call, for
// instance a different level or tags.
//
// The scope passed to f starts as a clone of the current scope and can be
// freely modified without affecting the current scope.
//
// It is a shorthand for PushScope followed by PopScope.
func (hub *Hub) WithScope(f func(scope *Scope)) {
	scope := hub.PushScope()
	defer hub.PopScope()
	f(scope)
}

// ConfigureScope runs f in the current scope.
//
// It is useful to set data that applies to all events that share the current
// scope.
//
// Modifying the scope affects all references to the current scope.
//
// See also WithScope for making isolated temporary changes.
func (hub *Hub) ConfigureScope(f func(scope *Scope)) {
	scope := hub.Scope()
	f(scope)
}

// CaptureEvent calls the method of the same name on the currently bound Client instance,
// passing it a top-level Scope.
// Returns the EventID if captured successfully, or nil if there's no Scope or Client available.
func (hub *Hub) CaptureEvent(event *Event) *EventID {
	client, scope := hub.Client(), hub.Scope()
	if client == nil || scope == nil {
		return nil
	}
	eventID := client.CaptureEvent(event, nil, scope)

	if event.Type != transactionType && eventID != nil {
		hub.mu.Lock()
		hub.lastEventID = *eventID
		hub.mu.Unlock()
	}
	return eventID
}

// CaptureMessage calls the method of the same name on the currently bound Client instance,
// passing it a top-level Scope.
// Returns the EventID if captured successfully, or nil if there's no Scope or Client available.
func (hub *Hub) CaptureMessage(message string) *EventID {
	client, scope := hub.Client(), hub.Scope()
	if client == nil || scope == nil {
		return nil
	}
	eventID := client.CaptureMessage(message, nil, scope)

	if eventID != nil {
		hub.mu.Lock()
		hub.lastEventID = *eventID
		hub.mu.Unlock()
	}
	return eventID
}

// CaptureException calls the method of the same name on the currently bound Client instance,
// passing it a top-level Scope.
// Returns the EventID if captured successfully, or nil if there's no Scope or Client available.
func (hub *Hub) CaptureException(exception error) *EventID {
	client, scope := hub.Client(), hub.Scope()
	if client == nil || scope == nil {
		return nil
	}
	eventID := client.CaptureException(exception, &EventHint{OriginalException: exception}, scope)

	if eventID != nil {
		hub.mu.Lock()
		hub.lastEventID = *eventID
		hub.mu.Unlock()
	}
	return eventID
}

// CaptureCheckIn calls the method of the same name on the currently bound Client instance,
// passing it a top-level Scope.
// Returns the check-in's EventID if it was captured successfully, or nil otherwise.
func (hub *Hub) CaptureCheckIn(checkIn *CheckIn, monitorConfig *MonitorConfig) *EventID {
	client, scope := hub.Client(), hub.Scope()
	if client == nil {
		return nil
	}

	return client.CaptureCheckIn(checkIn, monitorConfig, scope)
}

// AddBreadcrumb records a new breadcrumb.
//
// The total number of breadcrumbs that can be recorded is limited by the
// configuration on the client.
func (hub *Hub) AddBreadcrumb(breadcrumb *Breadcrumb, hint *BreadcrumbHint) {
	client := hub.Client()

	// If there's no client, just store it on the scope straight away.
	if client == nil {
		hub.Scope().AddBreadcrumb(breadcrumb, maxBreadcrumbs)
		return
	}

	max := client.options.MaxBreadcrumbs
	switch {
	case max < 0:
		return
	case max == 0:
		max = defaultMaxBreadcrumbs
	case max > maxBreadcrumbs:
		max = maxBreadcrumbs
	}

	if client.options.BeforeBreadcrumb != nil {
		if hint == nil {
			hint = &BreadcrumbHint{}
		}
		if breadcrumb = client.options.BeforeBreadcrumb(breadcrumb, hint); breadcrumb == nil {
			Logger.Println("breadcrumb dropped due to BeforeBreadcrumb callback.")
			return
		}
	}

	hub.Scope().AddBreadcrumb(breadcrumb, max)
}

// Recover calls the method of the same name on the currently bound Client instance,
// passing it a top-level Scope.
// Returns the EventID if captured successfully, or nil if there's no Scope or Client available.
func (hub *Hub) Recover(err interface{}) *EventID {
	if err == nil {
		err = recover()
	}
	client, scope := hub.Client(), hub.Scope()
	if client == nil || scope == nil {
		return nil
	}
	return client.Recover(err, &EventHint{RecoveredException: err}, scope)
}

// RecoverWithContext calls the method of the same name on the currently bound Client instance,
// passing it a top-level Scope.
// Returns the EventID if captured successfully, or nil if there's no Scope or Client available.
func (hub *Hub) RecoverWithContext(ctx context.Context, err interface{}) *EventID {
	if err == nil {
		err = recover()
	}
	client, scope := hub.Client(), hub.Scope()
	if client == nil || scope == nil {
		return nil
	}
	return client.RecoverWithContext(ctx, err, &EventHint{RecoveredException: err}, scope)
}

// Flush waits until the underlying Transport sends any buffered events to the
// Sentry server, blocking for at most the given timeout. It returns false if
// the timeout was reached. In that case, some events may not have been sent.
//
// Flush should be called before terminating the program to avoid
// unintentionally dropping events.
//
// Do not call Flush indiscriminately after every call to CaptureEvent,
// CaptureException or CaptureMessage. Instead, to have the SDK send events over
// the network synchronously, configure it to use the HTTPSyncTransport in the
// call to Init.
func (hub *Hub) Flush(timeout time.Duration) bool {
	client := hub.Client()

	if client == nil {
		return false
	}

	return client.Flush(timeout)
}

// GetTraceparent returns the current Sentry traceparent string, to be used as an HTTP header value
// or HTML meta tag value.
// This function is context aware: it returns the traceparent based either on
// the current span or on the scope's propagation context.
func (hub *Hub) GetTraceparent() string {
	scope := hub.Scope()

	if scope.span != nil {
		return scope.span.ToSentryTrace()
	}

	return fmt.Sprintf("%s-%s", scope.propagationContext.TraceID, scope.propagationContext.SpanID)
}

// GetBaggage returns the current Sentry baggage string, to be used as a HTTP header value
|
||||
// or HTML meta tag value.
|
||||
// This function is context aware, as in it either returns the baggage based
|
||||
// on the current span or the scope's propagation context.
|
||||
func (hub *Hub) GetBaggage() string {
|
||||
scope := hub.Scope()
|
||||
|
||||
if scope.span != nil {
|
||||
return scope.span.ToBaggage()
|
||||
}
|
||||
|
||||
return scope.propagationContext.DynamicSamplingContext.String()
|
||||
}
|
||||
|
||||
// HasHubOnContext checks whether Hub instance is bound to a given Context struct.
|
||||
func HasHubOnContext(ctx context.Context) bool {
|
||||
_, ok := ctx.Value(HubContextKey).(*Hub)
|
||||
return ok
|
||||
}
|
||||
|
||||
// GetHubFromContext tries to retrieve Hub instance from the given Context struct
|
||||
// or return nil if one is not found.
|
||||
func GetHubFromContext(ctx context.Context) *Hub {
|
||||
if hub, ok := ctx.Value(HubContextKey).(*Hub); ok {
|
||||
return hub
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// hubFromContext returns either a hub stored in the context or the current hub.
|
||||
// The return value is guaranteed to be non-nil, unlike GetHubFromContext.
|
||||
func hubFromContext(ctx context.Context) *Hub {
|
||||
if hub, ok := ctx.Value(HubContextKey).(*Hub); ok {
|
||||
return hub
|
||||
}
|
||||
return currentHub
|
||||
}
|
||||
|
||||
// SetHubOnContext stores given Hub instance on the Context struct and returns a new Context.
|
||||
func SetHubOnContext(ctx context.Context, hub *Hub) context.Context {
|
||||
return context.WithValue(ctx, HubContextKey, hub)
|
||||
}
|
||||
391
vendor/github.com/getsentry/sentry-go/integrations.go
generated
vendored
Normal file
@@ -0,0 +1,391 @@
package sentry

import (
	"fmt"
	"os"
	"regexp"
	"runtime"
	"runtime/debug"
	"strings"
	"sync"
)

// ================================
// Modules Integration
// ================================

type modulesIntegration struct {
	once    sync.Once
	modules map[string]string
}

func (mi *modulesIntegration) Name() string {
	return "Modules"
}

func (mi *modulesIntegration) SetupOnce(client *Client) {
	client.AddEventProcessor(mi.processor)
}

func (mi *modulesIntegration) processor(event *Event, _ *EventHint) *Event {
	if len(event.Modules) == 0 {
		mi.once.Do(func() {
			info, ok := debug.ReadBuildInfo()
			if !ok {
				Logger.Print("The Modules integration is not available in binaries built without module support.")
				return
			}
			mi.modules = extractModules(info)
		})
	}
	event.Modules = mi.modules
	return event
}

func extractModules(info *debug.BuildInfo) map[string]string {
	modules := map[string]string{
		info.Main.Path: info.Main.Version,
	}
	for _, dep := range info.Deps {
		ver := dep.Version
		if dep.Replace != nil {
			ver += fmt.Sprintf(" => %s %s", dep.Replace.Path, dep.Replace.Version)
		}
		modules[dep.Path] = strings.TrimSuffix(ver, " ")
	}
	return modules
}
// ================================
// Environment Integration
// ================================

type environmentIntegration struct{}

func (ei *environmentIntegration) Name() string {
	return "Environment"
}

func (ei *environmentIntegration) SetupOnce(client *Client) {
	client.AddEventProcessor(ei.processor)
}

func (ei *environmentIntegration) processor(event *Event, _ *EventHint) *Event {
	// Initialize maps as necessary.
	contextNames := []string{"device", "os", "runtime"}
	if event.Contexts == nil {
		event.Contexts = make(map[string]Context, len(contextNames))
	}
	for _, name := range contextNames {
		if event.Contexts[name] == nil {
			event.Contexts[name] = make(Context)
		}
	}

	// Set contextual information preserving existing data. For each context, if
	// the existing value is not of type map[string]interface{}, then no
	// additional information is added.
	if deviceContext, ok := event.Contexts["device"]; ok {
		if _, ok := deviceContext["arch"]; !ok {
			deviceContext["arch"] = runtime.GOARCH
		}
		if _, ok := deviceContext["num_cpu"]; !ok {
			deviceContext["num_cpu"] = runtime.NumCPU()
		}
	}
	if osContext, ok := event.Contexts["os"]; ok {
		if _, ok := osContext["name"]; !ok {
			osContext["name"] = runtime.GOOS
		}
	}
	if runtimeContext, ok := event.Contexts["runtime"]; ok {
		if _, ok := runtimeContext["name"]; !ok {
			runtimeContext["name"] = "go"
		}
		if _, ok := runtimeContext["version"]; !ok {
			runtimeContext["version"] = runtime.Version()
		}
		if _, ok := runtimeContext["go_numroutines"]; !ok {
			runtimeContext["go_numroutines"] = runtime.NumGoroutine()
		}
		if _, ok := runtimeContext["go_maxprocs"]; !ok {
			runtimeContext["go_maxprocs"] = runtime.GOMAXPROCS(0)
		}
		if _, ok := runtimeContext["go_numcgocalls"]; !ok {
			runtimeContext["go_numcgocalls"] = runtime.NumCgoCall()
		}
	}
	return event
}

// ================================
// Ignore Errors Integration
// ================================

type ignoreErrorsIntegration struct {
	ignoreErrors []*regexp.Regexp
}

func (iei *ignoreErrorsIntegration) Name() string {
	return "IgnoreErrors"
}

func (iei *ignoreErrorsIntegration) SetupOnce(client *Client) {
	iei.ignoreErrors = transformStringsIntoRegexps(client.options.IgnoreErrors)
	client.AddEventProcessor(iei.processor)
}

func (iei *ignoreErrorsIntegration) processor(event *Event, _ *EventHint) *Event {
	suspects := getIgnoreErrorsSuspects(event)

	for _, suspect := range suspects {
		for _, pattern := range iei.ignoreErrors {
			if pattern.Match([]byte(suspect)) || strings.Contains(suspect, pattern.String()) {
				Logger.Printf("Event dropped due to being matched by `IgnoreErrors` option."+
					"| Value matched: %s | Filter used: %s", suspect, pattern)
				return nil
			}
		}
	}

	return event
}

func transformStringsIntoRegexps(strings []string) []*regexp.Regexp {
	var exprs []*regexp.Regexp

	for _, s := range strings {
		r, err := regexp.Compile(s)
		if err == nil {
			exprs = append(exprs, r)
		}
	}

	return exprs
}

func getIgnoreErrorsSuspects(event *Event) []string {
	suspects := []string{}

	if event.Message != "" {
		suspects = append(suspects, event.Message)
	}

	for _, ex := range event.Exception {
		suspects = append(suspects, ex.Type, ex.Value)
	}

	return suspects
}
// ================================
// Ignore Transactions Integration
// ================================

type ignoreTransactionsIntegration struct {
	ignoreTransactions []*regexp.Regexp
}

func (iei *ignoreTransactionsIntegration) Name() string {
	return "IgnoreTransactions"
}

func (iei *ignoreTransactionsIntegration) SetupOnce(client *Client) {
	iei.ignoreTransactions = transformStringsIntoRegexps(client.options.IgnoreTransactions)
	client.AddEventProcessor(iei.processor)
}

func (iei *ignoreTransactionsIntegration) processor(event *Event, _ *EventHint) *Event {
	suspect := event.Transaction
	if suspect == "" {
		return event
	}

	for _, pattern := range iei.ignoreTransactions {
		if pattern.Match([]byte(suspect)) || strings.Contains(suspect, pattern.String()) {
			Logger.Printf("Transaction dropped due to being matched by `IgnoreTransactions` option."+
				"| Value matched: %s | Filter used: %s", suspect, pattern)
			return nil
		}
	}

	return event
}

// ================================
// Contextify Frames Integration
// ================================

type contextifyFramesIntegration struct {
	sr              sourceReader
	contextLines    int
	cachedLocations sync.Map
}

func (cfi *contextifyFramesIntegration) Name() string {
	return "ContextifyFrames"
}

func (cfi *contextifyFramesIntegration) SetupOnce(client *Client) {
	cfi.sr = newSourceReader()
	cfi.contextLines = 5

	client.AddEventProcessor(cfi.processor)
}

func (cfi *contextifyFramesIntegration) processor(event *Event, _ *EventHint) *Event {
	// Range over all exceptions
	for _, ex := range event.Exception {
		// If it has no stacktrace, just bail out
		if ex.Stacktrace == nil {
			continue
		}

		// If it does, it should have frames, so try to contextify them
		ex.Stacktrace.Frames = cfi.contextify(ex.Stacktrace.Frames)
	}

	// Range over all threads
	for _, th := range event.Threads {
		// If it has no stacktrace, just bail out
		if th.Stacktrace == nil {
			continue
		}

		// If it does, it should have frames, so try to contextify them
		th.Stacktrace.Frames = cfi.contextify(th.Stacktrace.Frames)
	}

	return event
}

func (cfi *contextifyFramesIntegration) contextify(frames []Frame) []Frame {
	contextifiedFrames := make([]Frame, 0, len(frames))

	for _, frame := range frames {
		if !frame.InApp {
			contextifiedFrames = append(contextifiedFrames, frame)
			continue
		}

		var path string

		if cachedPath, ok := cfi.cachedLocations.Load(frame.AbsPath); ok {
			if p, ok := cachedPath.(string); ok {
				path = p
			}
		} else {
			// Optimize for happy path here
			if fileExists(frame.AbsPath) {
				path = frame.AbsPath
			} else {
				path = cfi.findNearbySourceCodeLocation(frame.AbsPath)
			}
		}

		if path == "" {
			contextifiedFrames = append(contextifiedFrames, frame)
			continue
		}

		lines, contextLine := cfi.sr.readContextLines(path, frame.Lineno, cfi.contextLines)
		contextifiedFrames = append(contextifiedFrames, cfi.addContextLinesToFrame(frame, lines, contextLine))
	}

	return contextifiedFrames
}

func (cfi *contextifyFramesIntegration) findNearbySourceCodeLocation(originalPath string) string {
	trimmedPath := strings.TrimPrefix(originalPath, "/")
	components := strings.Split(trimmedPath, "/")

	for len(components) > 0 {
		components = components[1:]
		possibleLocation := strings.Join(components, "/")

		if fileExists(possibleLocation) {
			cfi.cachedLocations.Store(originalPath, possibleLocation)
			return possibleLocation
		}
	}

	cfi.cachedLocations.Store(originalPath, "")
	return ""
}
func (cfi *contextifyFramesIntegration) addContextLinesToFrame(frame Frame, lines [][]byte, contextLine int) Frame {
	for i, line := range lines {
		switch {
		case i < contextLine:
			frame.PreContext = append(frame.PreContext, string(line))
		case i == contextLine:
			frame.ContextLine = string(line)
		default:
			frame.PostContext = append(frame.PostContext, string(line))
		}
	}
	return frame
}

// ================================
// Global Tags Integration
// ================================

const envTagsPrefix = "SENTRY_TAGS_"

type globalTagsIntegration struct {
	tags    map[string]string
	envTags map[string]string
}

func (ti *globalTagsIntegration) Name() string {
	return "GlobalTags"
}

func (ti *globalTagsIntegration) SetupOnce(client *Client) {
	ti.tags = make(map[string]string, len(client.options.Tags))
	for k, v := range client.options.Tags {
		ti.tags[k] = v
	}

	ti.envTags = loadEnvTags()

	client.AddEventProcessor(ti.processor)
}

func (ti *globalTagsIntegration) processor(event *Event, _ *EventHint) *Event {
	if len(ti.tags) == 0 && len(ti.envTags) == 0 {
		return event
	}

	if event.Tags == nil {
		event.Tags = make(map[string]string, len(ti.tags)+len(ti.envTags))
	}

	for k, v := range ti.tags {
		if _, ok := event.Tags[k]; !ok {
			event.Tags[k] = v
		}
	}

	for k, v := range ti.envTags {
		if _, ok := event.Tags[k]; !ok {
			event.Tags[k] = v
		}
	}

	return event
}

func loadEnvTags() map[string]string {
	tags := map[string]string{}
	for _, pair := range os.Environ() {
		// SplitN, not Split: tag values may themselves contain "=".
		parts := strings.SplitN(pair, "=", 2)
		if !strings.HasPrefix(parts[0], envTagsPrefix) {
			continue
		}
		tag := strings.TrimPrefix(parts[0], envTagsPrefix)
		tags[tag] = parts[1]
	}
	return tags
}
549
vendor/github.com/getsentry/sentry-go/interfaces.go
generated
vendored
Normal file
@@ -0,0 +1,549 @@
package sentry

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"net"
	"net/http"
	"reflect"
	"slices"
	"strings"
	"time"
)

// eventType is the type of an error event.
const eventType = "event"

// transactionType is the type of a transaction event.
const transactionType = "transaction"

// checkInType is the type of a check-in event.
const checkInType = "check_in"

// Level marks the severity of the event.
type Level string

// Describes the severity of the event.
const (
	LevelDebug   Level = "debug"
	LevelInfo    Level = "info"
	LevelWarning Level = "warning"
	LevelError   Level = "error"
	LevelFatal   Level = "fatal"
)

// SdkInfo contains all metadata about the SDK being used.
type SdkInfo struct {
	Name         string       `json:"name,omitempty"`
	Version      string       `json:"version,omitempty"`
	Integrations []string     `json:"integrations,omitempty"`
	Packages     []SdkPackage `json:"packages,omitempty"`
}

// SdkPackage describes a package that was installed.
type SdkPackage struct {
	Name    string `json:"name,omitempty"`
	Version string `json:"version,omitempty"`
}

// TODO: This type could be more useful, as map of interface{} is too generic
// and requires a lot of type assertions in beforeBreadcrumb calls,
// plus it could just be map[string]interface{} then.

// BreadcrumbHint contains information that can be associated with a Breadcrumb.
type BreadcrumbHint map[string]interface{}

// Breadcrumb specifies an application event that occurred before a Sentry event.
// An event may contain one or more breadcrumbs.
type Breadcrumb struct {
	Type      string                 `json:"type,omitempty"`
	Category  string                 `json:"category,omitempty"`
	Message   string                 `json:"message,omitempty"`
	Data      map[string]interface{} `json:"data,omitempty"`
	Level     Level                  `json:"level,omitempty"`
	Timestamp time.Time              `json:"timestamp"`
}

// TODO: provide constants for known breadcrumb types.
// See https://develop.sentry.dev/sdk/event-payloads/breadcrumbs/#breadcrumb-types.

// MarshalJSON converts the Breadcrumb struct to JSON.
func (b *Breadcrumb) MarshalJSON() ([]byte, error) {
	// We want to omit time.Time zero values, otherwise the server will try to
	// interpret dates too far in the past. However, encoding/json doesn't
	// support the "omitempty" option for struct types. See
	// https://golang.org/issues/11939.
	//
	// We overcome the limitation and achieve what we want by shadowing fields
	// and a few type tricks.

	// breadcrumb aliases Breadcrumb to allow calling json.Marshal without an
	// infinite loop. It preserves all fields while none of the attached
	// methods.
	type breadcrumb Breadcrumb

	if b.Timestamp.IsZero() {
		return json.Marshal(struct {
			// Embed all of the fields of Breadcrumb.
			*breadcrumb
			// Timestamp shadows the original Timestamp field and is meant to
			// remain nil, triggering the omitempty behavior.
			Timestamp json.RawMessage `json:"timestamp,omitempty"`
		}{breadcrumb: (*breadcrumb)(b)})
	}
	return json.Marshal((*breadcrumb)(b))
}

// Attachment allows associating files with your events to aid in investigation.
// An event may contain one or more attachments.
type Attachment struct {
	Filename    string
	ContentType string
	Payload     []byte
}

// User describes the user associated with an Event. If this is used, at least
// an ID or an IP address should be provided.
type User struct {
	ID        string            `json:"id,omitempty"`
	Email     string            `json:"email,omitempty"`
	IPAddress string            `json:"ip_address,omitempty"`
	Username  string            `json:"username,omitempty"`
	Name      string            `json:"name,omitempty"`
	Data      map[string]string `json:"data,omitempty"`
}

func (u User) IsEmpty() bool {
	if u.ID != "" {
		return false
	}

	if u.Email != "" {
		return false
	}

	if u.IPAddress != "" {
		return false
	}

	if u.Username != "" {
		return false
	}

	if u.Name != "" {
		return false
	}

	if len(u.Data) > 0 {
		return false
	}

	return true
}
// Request contains information on an HTTP request related to the event.
type Request struct {
	URL         string            `json:"url,omitempty"`
	Method      string            `json:"method,omitempty"`
	Data        string            `json:"data,omitempty"`
	QueryString string            `json:"query_string,omitempty"`
	Cookies     string            `json:"cookies,omitempty"`
	Headers     map[string]string `json:"headers,omitempty"`
	Env         map[string]string `json:"env,omitempty"`
}

var sensitiveHeaders = map[string]struct{}{
	"Authorization":       {},
	"Proxy-Authorization": {},
	"Cookie":              {},
	"X-Forwarded-For":     {},
	"X-Real-Ip":           {},
}

// NewRequest returns a new Sentry Request from the given http.Request.
//
// NewRequest avoids operations that depend on network access. In particular, it
// does not read r.Body.
func NewRequest(r *http.Request) *Request {
	protocol := schemeHTTP
	if r.TLS != nil || r.Header.Get("X-Forwarded-Proto") == "https" {
		protocol = schemeHTTPS
	}
	url := fmt.Sprintf("%s://%s%s", protocol, r.Host, r.URL.Path)

	var cookies string
	var env map[string]string
	headers := map[string]string{}

	if client := CurrentHub().Client(); client != nil && client.options.SendDefaultPII {
		// We read only the first Cookie header because of the specification:
		// https://tools.ietf.org/html/rfc6265#section-5.4
		// When the user agent generates an HTTP request, the user agent MUST NOT
		// attach more than one Cookie header field.
		cookies = r.Header.Get("Cookie")

		headers = make(map[string]string, len(r.Header))
		for k, v := range r.Header {
			headers[k] = strings.Join(v, ",")
		}

		if addr, port, err := net.SplitHostPort(r.RemoteAddr); err == nil {
			env = map[string]string{"REMOTE_ADDR": addr, "REMOTE_PORT": port}
		}
	} else {
		for k, v := range r.Header {
			if _, ok := sensitiveHeaders[k]; !ok {
				headers[k] = strings.Join(v, ",")
			}
		}
	}

	headers["Host"] = r.Host

	return &Request{
		URL:         url,
		Method:      r.Method,
		QueryString: r.URL.RawQuery,
		Cookies:     cookies,
		Headers:     headers,
		Env:         env,
	}
}
// Mechanism is the mechanism by which an exception was generated and handled.
type Mechanism struct {
	Type             string         `json:"type,omitempty"`
	Description      string         `json:"description,omitempty"`
	HelpLink         string         `json:"help_link,omitempty"`
	Source           string         `json:"source,omitempty"`
	Handled          *bool          `json:"handled,omitempty"`
	ParentID         *int           `json:"parent_id,omitempty"`
	ExceptionID      int            `json:"exception_id"`
	IsExceptionGroup bool           `json:"is_exception_group,omitempty"`
	Data             map[string]any `json:"data,omitempty"`
}

// SetUnhandled indicates that the exception is an unhandled exception, i.e.
// from a panic.
func (m *Mechanism) SetUnhandled() {
	m.Handled = Pointer(false)
}

// Exception specifies an error that occurred.
type Exception struct {
	Type       string      `json:"type,omitempty"`  // used as the main issue title
	Value      string      `json:"value,omitempty"` // used as the main issue subtitle
	Module     string      `json:"module,omitempty"`
	ThreadID   uint64      `json:"thread_id,omitempty"`
	Stacktrace *Stacktrace `json:"stacktrace,omitempty"`
	Mechanism  *Mechanism  `json:"mechanism,omitempty"`
}

// SDKMetaData is a struct to stash data which is needed at some point in the SDK's event processing pipeline
// but which shouldn't get sent to Sentry.
type SDKMetaData struct {
	dsc DynamicSamplingContext
}

// TransactionInfo contains information about how the name of the transaction was determined.
type TransactionInfo struct {
	Source TransactionSource `json:"source,omitempty"`
}

// The DebugMeta interface is not used in Golang apps, but may be populated
// when proxying Events from other platforms, like iOS, Android, and the
// Web. (See: https://develop.sentry.dev/sdk/event-payloads/debugmeta/ ).
type DebugMeta struct {
	SdkInfo *DebugMetaSdkInfo `json:"sdk_info,omitempty"`
	Images  []DebugMetaImage  `json:"images,omitempty"`
}

type DebugMetaSdkInfo struct {
	SdkName           string `json:"sdk_name,omitempty"`
	VersionMajor      int    `json:"version_major,omitempty"`
	VersionMinor      int    `json:"version_minor,omitempty"`
	VersionPatchlevel int    `json:"version_patchlevel,omitempty"`
}

type DebugMetaImage struct {
	Type        string `json:"type,omitempty"`         // all
	ImageAddr   string `json:"image_addr,omitempty"`   // macho,elf,pe
	ImageSize   int    `json:"image_size,omitempty"`   // macho,elf,pe
	DebugID     string `json:"debug_id,omitempty"`     // macho,elf,pe,wasm,sourcemap
	DebugFile   string `json:"debug_file,omitempty"`   // macho,elf,pe,wasm
	CodeID      string `json:"code_id,omitempty"`      // macho,elf,pe,wasm
	CodeFile    string `json:"code_file,omitempty"`    // macho,elf,pe,wasm,sourcemap
	ImageVmaddr string `json:"image_vmaddr,omitempty"` // macho,elf,pe
	Arch        string `json:"arch,omitempty"`         // macho,elf,pe
	UUID        string `json:"uuid,omitempty"`         // proguard
}
// EventID is a hexadecimal string representing a unique uuid4 for an Event.
// An EventID must be 32 characters long, lowercase and not have any dashes.
type EventID string

type Context = map[string]interface{}

// Event is the fundamental data structure that is sent to Sentry.
type Event struct {
	Breadcrumbs []*Breadcrumb          `json:"breadcrumbs,omitempty"`
	Contexts    map[string]Context     `json:"contexts,omitempty"`
	Dist        string                 `json:"dist,omitempty"`
	Environment string                 `json:"environment,omitempty"`
	EventID     EventID                `json:"event_id,omitempty"`
	Extra       map[string]interface{} `json:"extra,omitempty"`
	Fingerprint []string               `json:"fingerprint,omitempty"`
	Level       Level                  `json:"level,omitempty"`
	Message     string                 `json:"message,omitempty"`
	Platform    string                 `json:"platform,omitempty"`
	Release     string                 `json:"release,omitempty"`
	Sdk         SdkInfo                `json:"sdk,omitempty"`
	ServerName  string                 `json:"server_name,omitempty"`
	Threads     []Thread               `json:"threads,omitempty"`
	Tags        map[string]string      `json:"tags,omitempty"`
	Timestamp   time.Time              `json:"timestamp"`
	Transaction string                 `json:"transaction,omitempty"`
	User        User                   `json:"user,omitempty"`
	Logger      string                 `json:"logger,omitempty"`
	Modules     map[string]string      `json:"modules,omitempty"`
	Request     *Request               `json:"request,omitempty"`
	Exception   []Exception            `json:"exception,omitempty"`
	DebugMeta   *DebugMeta             `json:"debug_meta,omitempty"`
	Attachments []*Attachment          `json:"-"`

	// The fields below are only relevant for transactions.

	Type            string           `json:"type,omitempty"`
	StartTime       time.Time        `json:"start_timestamp"`
	Spans           []*Span          `json:"spans,omitempty"`
	TransactionInfo *TransactionInfo `json:"transaction_info,omitempty"`

	// The fields below are only relevant for crons/check ins

	CheckIn       *CheckIn       `json:"check_in,omitempty"`
	MonitorConfig *MonitorConfig `json:"monitor_config,omitempty"`

	// The fields below are not part of the final JSON payload.

	sdkMetaData SDKMetaData
}

// SetException appends the unwrapped errors to the event's exception list.
//
// maxErrorDepth is the maximum depth of the error chain we will look
// into while unwrapping the errors. If maxErrorDepth is -1, we will
// unwrap all errors in the chain.
func (e *Event) SetException(exception error, maxErrorDepth int) {
	if exception == nil {
		return
	}

	err := exception

	for i := 0; err != nil && (i < maxErrorDepth || maxErrorDepth == -1); i++ {
		// Add the current error to the exception slice with its details
		e.Exception = append(e.Exception, Exception{
			Value:      err.Error(),
			Type:       reflect.TypeOf(err).String(),
			Stacktrace: ExtractStacktrace(err),
		})

		// Attempt to unwrap the error using the standard library's Unwrap method.
		// If errors.Unwrap returns nil, it means either there is no error to unwrap,
		// or the error does not implement the Unwrap method.
		unwrappedErr := errors.Unwrap(err)

		if unwrappedErr != nil {
			// The error was successfully unwrapped using the standard library's Unwrap method.
			err = unwrappedErr
			continue
		}

		cause, ok := err.(interface{ Cause() error })
		if !ok {
			// We cannot unwrap the error further.
			break
		}

		// The error implements the Cause method, indicating it may have been wrapped
		// using the github.com/pkg/errors package.
		err = cause.Cause()
	}

	// Add a trace of the current stack to the most recent error in a chain if
	// it doesn't have a stack trace yet.
	// We only add to the most recent error to avoid duplication and because the
	// current stack is most likely unrelated to errors deeper in the chain.
	if e.Exception[0].Stacktrace == nil {
		e.Exception[0].Stacktrace = NewStacktrace()
	}

	if len(e.Exception) <= 1 {
		return
	}

	// event.Exception should be sorted such that the most recent error is last.
	slices.Reverse(e.Exception)

	for i := range e.Exception {
		e.Exception[i].Mechanism = &Mechanism{
			IsExceptionGroup: true,
			ExceptionID:      i,
			Type:             "generic",
		}
		if i == 0 {
			continue
		}
		e.Exception[i].Mechanism.ParentID = Pointer(i - 1)
	}
}
// TODO: Event.Contexts map[string]interface{} => map[string]EventContext,
// to prevent accidentally storing T when we mean *T.
// For example, the TraceContext must be stored as *TraceContext to pick up the
// MarshalJSON method (and avoid copying).
// type EventContext interface{ EventContext() }

// MarshalJSON converts the Event struct to JSON.
func (e *Event) MarshalJSON() ([]byte, error) {
	// We want to omit time.Time zero values, otherwise the server will try to
	// interpret dates too far in the past. However, encoding/json doesn't
	// support the "omitempty" option for struct types. See
	// https://golang.org/issues/11939.
	//
	// We overcome the limitation and achieve what we want by shadowing fields
	// and a few type tricks.
	if e.Type == transactionType {
		return e.transactionMarshalJSON()
	}

	if e.Type == checkInType {
		return e.checkInMarshalJSON()
	}
	return e.defaultMarshalJSON()
}

func (e *Event) defaultMarshalJSON() ([]byte, error) {
	// event aliases Event to allow calling json.Marshal without an infinite
	// loop. It preserves all fields while none of the attached methods.
	type event Event

	// errorEvent is like Event with shadowed fields for customizing JSON
	// marshaling.
	type errorEvent struct {
		*event

		// Timestamp shadows the original Timestamp field. It allows us to
		// include the timestamp when non-zero and omit it otherwise.
		Timestamp json.RawMessage `json:"timestamp,omitempty"`

		// The fields below are not part of error events and only make sense to
		// be sent for transactions. They shadow the respective fields in Event
		// and are meant to remain nil, triggering the omitempty behavior.

		Type            json.RawMessage `json:"type,omitempty"`
		StartTime       json.RawMessage `json:"start_timestamp,omitempty"`
		Spans           json.RawMessage `json:"spans,omitempty"`
		TransactionInfo json.RawMessage `json:"transaction_info,omitempty"`
	}

	x := errorEvent{event: (*event)(e)}
	if !e.Timestamp.IsZero() {
		b, err := e.Timestamp.MarshalJSON()
		if err != nil {
			return nil, err
		}
		x.Timestamp = b
	}
	return json.Marshal(x)
}

func (e *Event) transactionMarshalJSON() ([]byte, error) {
	// event aliases Event to allow calling json.Marshal without an infinite
	// loop. It preserves all fields while none of the attached methods.
	type event Event

	// transactionEvent is like Event with shadowed fields for customizing JSON
	// marshaling.
	type transactionEvent struct {
		*event

		// The fields below shadow the respective fields in Event. They allow us
		// to include timestamps when non-zero and omit them otherwise.

		StartTime json.RawMessage `json:"start_timestamp,omitempty"`
		Timestamp json.RawMessage `json:"timestamp,omitempty"`
	}

	x := transactionEvent{event: (*event)(e)}
	if !e.Timestamp.IsZero() {
		b, err := e.Timestamp.MarshalJSON()
		if err != nil {
			return nil, err
		}
		x.Timestamp = b
	}
	if !e.StartTime.IsZero() {
		b, err := e.StartTime.MarshalJSON()
		if err != nil {
			return nil, err
		}
		x.StartTime = b
	}
	return json.Marshal(x)
}

func (e *Event) checkInMarshalJSON() ([]byte, error) {
	checkIn := serializedCheckIn{
		CheckInID:     string(e.CheckIn.ID),
		MonitorSlug:   e.CheckIn.MonitorSlug,
		Status:        e.CheckIn.Status,
		Duration:      e.CheckIn.Duration.Seconds(),
		Release:       e.Release,
		Environment:   e.Environment,
		MonitorConfig: nil,
	}

	if e.MonitorConfig != nil {
		checkIn.MonitorConfig = &MonitorConfig{
			Schedule:      e.MonitorConfig.Schedule,
			CheckInMargin: e.MonitorConfig.CheckInMargin,
			MaxRuntime:    e.MonitorConfig.MaxRuntime,
			Timezone:      e.MonitorConfig.Timezone,
		}
	}

	return json.Marshal(checkIn)
}

// NewEvent creates a new Event.
func NewEvent() *Event {
	return &Event{
		Contexts: make(map[string]Context),
		Extra:    make(map[string]interface{}),
		Tags:     make(map[string]string),
		Modules:  make(map[string]string),
	}
}

// Thread specifies threads that were running at the time of an event.
type Thread struct {
|
||||
ID string `json:"id,omitempty"`
|
||||
Name string `json:"name,omitempty"`
|
||||
Stacktrace *Stacktrace `json:"stacktrace,omitempty"`
|
||||
Crashed bool `json:"crashed,omitempty"`
|
||||
Current bool `json:"current,omitempty"`
|
||||
}
|
||||
|
||||
// EventHint contains information that can be associated with an Event.
|
||||
type EventHint struct {
|
||||
Data interface{}
|
||||
EventID string
|
||||
OriginalException error
|
||||
RecoveredException interface{}
|
||||
Context context.Context
|
||||
Request *http.Request
|
||||
Response *http.Response
|
||||
}
79
vendor/github.com/getsentry/sentry-go/internal/debug/transport.go
generated
vendored
Normal file
@@ -0,0 +1,79 @@
package debug

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"net/http/httptrace"
	"net/http/httputil"
)

// Transport implements http.RoundTripper and can be used to wrap other HTTP
// transports for debugging, normally http.DefaultTransport.
type Transport struct {
	http.RoundTripper

	Output io.Writer
	// Dump controls whether to dump HTTP requests and responses.
	Dump bool
	// Trace enables usage of net/http/httptrace.
	Trace bool
}

func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
	var buf bytes.Buffer
	if t.Dump {
		b, err := httputil.DumpRequestOut(req, true)
		if err != nil {
			panic(err)
		}
		_, err = buf.Write(ensureTrailingNewline(b))
		if err != nil {
			panic(err)
		}
	}
	if t.Trace {
		trace := &httptrace.ClientTrace{
			DNSDone: func(di httptrace.DNSDoneInfo) {
				fmt.Fprintf(&buf, "* DNS %v → %v\n", req.Host, di.Addrs)
			},
			GotConn: func(ci httptrace.GotConnInfo) {
				fmt.Fprintf(&buf, "* Connection local=%v remote=%v", ci.Conn.LocalAddr(), ci.Conn.RemoteAddr())
				if ci.Reused {
					fmt.Fprint(&buf, " (reused)")
				}
				if ci.WasIdle {
					fmt.Fprintf(&buf, " (idle %v)", ci.IdleTime)
				}
				fmt.Fprintln(&buf)
			},
		}
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	}
	resp, err := t.RoundTripper.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	if t.Dump {
		b, err := httputil.DumpResponse(resp, true)
		if err != nil {
			panic(err)
		}
		_, err = buf.Write(ensureTrailingNewline(b))
		if err != nil {
			panic(err)
		}
	}
	_, err = io.Copy(t.Output, &buf)
	if err != nil {
		panic(err)
	}
	return resp, nil
}

func ensureTrailingNewline(b []byte) []byte {
	if len(b) > 0 && b[len(b)-1] != '\n' {
		b = append(b, '\n')
	}
	return b
}
12
vendor/github.com/getsentry/sentry-go/internal/otel/baggage/README.md
generated
vendored
Normal file
@@ -0,0 +1,12 @@
## Why do we have this "otel/baggage" folder?

The root sentry-go SDK (namely, the Dynamic Sampling functionality) needs an implementation of the [baggage spec](https://www.w3.org/TR/baggage/).
For that reason, we've taken the existing baggage implementation from the [opentelemetry-go](https://github.com/open-telemetry/opentelemetry-go/) repository, and fixed a few things that in our opinion were violating the specification.

These issues are:
1. Baggage string value `one%20two` should be properly parsed as "one two"
1. Baggage string value `one+two` should be parsed as "one+two"
1. Go string value "one two" should be encoded as `one%20two` (percent encoding), and NOT as `one+two` (URL query encoding).
1. Go string value "1=1" might be encoded as `1=1`, because the spec says: "Note, value MAY contain any number of the equal sign (=) characters. Parsers MUST NOT assume that the equal sign is only used to separate key and value.". `1%3D1` is also valid, but to simplify the implementation we're not doing it.

Changes were made in this PR: https://github.com/getsentry/sentry-go/pull/568
604
vendor/github.com/getsentry/sentry-go/internal/otel/baggage/baggage.go
generated
vendored
Normal file
@@ -0,0 +1,604 @@
// Adapted from https://github.com/open-telemetry/opentelemetry-go/blob/c21b6b6bb31a2f74edd06e262f1690f3f6ea3d5c/baggage/baggage.go
//
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package baggage

import (
	"errors"
	"fmt"
	"net/url"
	"regexp"
	"strings"
	"unicode/utf8"

	"github.com/getsentry/sentry-go/internal/otel/baggage/internal/baggage"
)

const (
	maxMembers               = 180
	maxBytesPerMembers       = 4096
	maxBytesPerBaggageString = 8192

	listDelimiter     = ","
	keyValueDelimiter = "="
	propertyDelimiter = ";"

	keyDef      = `([\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5a\x5e-\x7a\x7c\x7e]+)`
	valueDef    = `([\x21\x23-\x2b\x2d-\x3a\x3c-\x5B\x5D-\x7e]*)`
	keyValueDef = `\s*` + keyDef + `\s*` + keyValueDelimiter + `\s*` + valueDef + `\s*`
)

var (
	keyRe      = regexp.MustCompile(`^` + keyDef + `$`)
	valueRe    = regexp.MustCompile(`^` + valueDef + `$`)
	propertyRe = regexp.MustCompile(`^(?:\s*` + keyDef + `\s*|` + keyValueDef + `)$`)
)

var (
	errInvalidKey      = errors.New("invalid key")
	errInvalidValue    = errors.New("invalid value")
	errInvalidProperty = errors.New("invalid baggage list-member property")
	errInvalidMember   = errors.New("invalid baggage list-member")
	errMemberNumber    = errors.New("too many list-members in baggage-string")
	errMemberBytes     = errors.New("list-member too large")
	errBaggageBytes    = errors.New("baggage-string too large")
)

// Property is an additional metadata entry for a baggage list-member.
type Property struct {
	key, value string

	// hasValue indicates whether the property carries a value at all,
	// distinguishing an empty value from an unset one.
	hasValue bool

	// hasData indicates whether the created property contains data or not.
	// Properties that do not contain data are invalid with no other check
	// required.
	hasData bool
}

// NewKeyProperty returns a new Property for key.
//
// If key is invalid, an error will be returned.
func NewKeyProperty(key string) (Property, error) {
	if !keyRe.MatchString(key) {
		return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidKey, key)
	}

	p := Property{key: key, hasData: true}
	return p, nil
}

// NewKeyValueProperty returns a new Property for key with value.
//
// If key or value are invalid, an error will be returned.
func NewKeyValueProperty(key, value string) (Property, error) {
	if !keyRe.MatchString(key) {
		return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidKey, key)
	}
	if !valueRe.MatchString(value) {
		return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidValue, value)
	}

	p := Property{
		key:      key,
		value:    value,
		hasValue: true,
		hasData:  true,
	}
	return p, nil
}

func newInvalidProperty() Property {
	return Property{}
}
// parseProperty attempts to decode a Property from the passed string. It
// returns an error if the input is invalid according to the W3C Baggage
// specification.
func parseProperty(property string) (Property, error) {
	if property == "" {
		return newInvalidProperty(), nil
	}

	match := propertyRe.FindStringSubmatch(property)
	if len(match) != 4 {
		return newInvalidProperty(), fmt.Errorf("%w: %q", errInvalidProperty, property)
	}

	p := Property{hasData: true}
	if match[1] != "" {
		p.key = match[1]
	} else {
		p.key = match[2]
		p.value = match[3]
		p.hasValue = true
	}

	return p, nil
}

// validate ensures p conforms to the W3C Baggage specification, returning an
// error otherwise.
func (p Property) validate() error {
	errFunc := func(err error) error {
		return fmt.Errorf("invalid property: %w", err)
	}

	if !p.hasData {
		return errFunc(fmt.Errorf("%w: %q", errInvalidProperty, p))
	}

	if !keyRe.MatchString(p.key) {
		return errFunc(fmt.Errorf("%w: %q", errInvalidKey, p.key))
	}
	if p.hasValue && !valueRe.MatchString(p.value) {
		return errFunc(fmt.Errorf("%w: %q", errInvalidValue, p.value))
	}
	if !p.hasValue && p.value != "" {
		return errFunc(errors.New("inconsistent value"))
	}
	return nil
}

// Key returns the Property key.
func (p Property) Key() string {
	return p.key
}

// Value returns the Property value. Additionally, a boolean is returned
// indicating whether the Property has a value at all, distinguishing an
// empty value from an unset one.
func (p Property) Value() (string, bool) {
	return p.value, p.hasValue
}

// String encodes Property into a string compliant with the W3C Baggage
// specification.
func (p Property) String() string {
	if p.hasValue {
		return fmt.Sprintf("%s%s%v", p.key, keyValueDelimiter, p.value)
	}
	return p.key
}

type properties []Property

func fromInternalProperties(iProps []baggage.Property) properties {
	if len(iProps) == 0 {
		return nil
	}

	props := make(properties, len(iProps))
	for i, p := range iProps {
		props[i] = Property{
			key:      p.Key,
			value:    p.Value,
			hasValue: p.HasValue,
		}
	}
	return props
}

func (p properties) asInternal() []baggage.Property {
	if len(p) == 0 {
		return nil
	}

	iProps := make([]baggage.Property, len(p))
	for i, prop := range p {
		iProps[i] = baggage.Property{
			Key:      prop.key,
			Value:    prop.value,
			HasValue: prop.hasValue,
		}
	}
	return iProps
}

func (p properties) Copy() properties {
	if len(p) == 0 {
		return nil
	}

	props := make(properties, len(p))
	copy(props, p)
	return props
}

// validate ensures each Property in p conforms to the W3C Baggage
// specification, returning an error otherwise.
func (p properties) validate() error {
	for _, prop := range p {
		if err := prop.validate(); err != nil {
			return err
		}
	}
	return nil
}

// String encodes properties into a string compliant with the W3C Baggage
// specification.
func (p properties) String() string {
	props := make([]string, len(p))
	for i, prop := range p {
		props[i] = prop.String()
	}
	return strings.Join(props, propertyDelimiter)
}

// Member is a list-member of a baggage-string as defined by the W3C Baggage
// specification.
type Member struct {
	key, value string
	properties properties

	// hasData indicates whether the created member contains data or not.
	// Members that do not contain data are invalid with no other check
	// required.
	hasData bool
}

// NewMember returns a new Member from the passed arguments. The key will be
// used directly while the value will be url decoded after validation. An error
// is returned if the created Member would be invalid according to the W3C
// Baggage specification.
func NewMember(key, value string, props ...Property) (Member, error) {
	m := Member{
		key:        key,
		value:      value,
		properties: properties(props).Copy(),
		hasData:    true,
	}
	if err := m.validate(); err != nil {
		return newInvalidMember(), err
	}
	//// NOTE(anton): I don't think we need to unescape here
	// decodedValue, err := url.PathUnescape(value)
	// if err != nil {
	// 	return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidValue, value)
	// }
	// m.value = decodedValue
	return m, nil
}

func newInvalidMember() Member {
	return Member{}
}

// parseMember attempts to decode a Member from the passed string. It returns
// an error if the input is invalid according to the W3C Baggage
// specification.
func parseMember(member string) (Member, error) {
	if n := len(member); n > maxBytesPerMembers {
		return newInvalidMember(), fmt.Errorf("%w: %d", errMemberBytes, n)
	}

	var (
		key, value string
		props      properties
	)

	parts := strings.SplitN(member, propertyDelimiter, 2)
	switch len(parts) {
	case 2:
		// Parse the member properties.
		for _, pStr := range strings.Split(parts[1], propertyDelimiter) {
			p, err := parseProperty(pStr)
			if err != nil {
				return newInvalidMember(), err
			}
			props = append(props, p)
		}
		fallthrough
	case 1:
		// Parse the member key/value pair.

		// Take into account a value can contain equal signs (=).
		kv := strings.SplitN(parts[0], keyValueDelimiter, 2)
		if len(kv) != 2 {
			return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidMember, member)
		}
		// "Leading and trailing whitespaces are allowed but MUST be trimmed
		// when converting the header into a data structure."
		key = strings.TrimSpace(kv[0])
		value = strings.TrimSpace(kv[1])
		if !keyRe.MatchString(key) {
			return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidKey, key)
		}
		if !valueRe.MatchString(value) {
			return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidValue, value)
		}
		decodedValue, err := url.PathUnescape(value)
		if err != nil {
			return newInvalidMember(), fmt.Errorf("%w: %q", err, value)
		}
		value = decodedValue
	default:
		// This should never happen unless a developer has changed the string
		// splitting somehow. Panic instead of failing silently and allowing
		// the bug to slip past the CI checks.
		panic("failed to parse baggage member")
	}

	return Member{key: key, value: value, properties: props, hasData: true}, nil
}
// validate ensures m conforms to the W3C Baggage specification.
// A key is just an ASCII string, but a value must be URL encoded UTF-8,
// returning an error otherwise.
func (m Member) validate() error {
	if !m.hasData {
		return fmt.Errorf("%w: %q", errInvalidMember, m)
	}

	if !keyRe.MatchString(m.key) {
		return fmt.Errorf("%w: %q", errInvalidKey, m.key)
	}
	//// NOTE(anton): IMO it's too early to validate the value here.
	// if !valueRe.MatchString(m.value) {
	// 	return fmt.Errorf("%w: %q", errInvalidValue, m.value)
	// }
	return m.properties.validate()
}

// Key returns the Member key.
func (m Member) Key() string { return m.key }

// Value returns the Member value.
func (m Member) Value() string { return m.value }

// Properties returns a copy of the Member properties.
func (m Member) Properties() []Property { return m.properties.Copy() }

// String encodes Member into a string compliant with the W3C Baggage
// specification.
func (m Member) String() string {
	// A key is just an ASCII string, but a value is URL encoded UTF-8.
	s := fmt.Sprintf("%s%s%s", m.key, keyValueDelimiter, percentEncodeValue(m.value))
	if len(m.properties) > 0 {
		s = fmt.Sprintf("%s%s%s", s, propertyDelimiter, m.properties.String())
	}
	return s
}

// percentEncodeValue encodes the baggage value, using percent-encoding for
// disallowed octets.
func percentEncodeValue(s string) string {
	const upperhex = "0123456789ABCDEF"
	var sb strings.Builder

	for byteIndex, width := 0, 0; byteIndex < len(s); byteIndex += width {
		runeValue, w := utf8.DecodeRuneInString(s[byteIndex:])
		width = w
		char := string(runeValue)
		if valueRe.MatchString(char) && char != "%" {
			// The character is returned as is, no need to percent-encode.
			sb.WriteString(char)
		} else {
			// We need to percent-encode each byte of the multi-octet character.
			for j := 0; j < width; j++ {
				b := s[byteIndex+j]
				sb.WriteByte('%')
				// Bitwise operations are inspired by "net/url".
				sb.WriteByte(upperhex[b>>4])
				sb.WriteByte(upperhex[b&15])
			}
		}
	}
	return sb.String()
}

// Baggage is a list of baggage members representing the baggage-string as
// defined by the W3C Baggage specification.
type Baggage struct { //nolint:golint
	list baggage.List
}

// New returns a new valid Baggage. It returns an error if it results in a
// Baggage exceeding limits set in that specification.
//
// It expects all the provided members to have already been validated.
func New(members ...Member) (Baggage, error) {
	if len(members) == 0 {
		return Baggage{}, nil
	}

	b := make(baggage.List)
	for _, m := range members {
		if !m.hasData {
			return Baggage{}, errInvalidMember
		}

		// OpenTelemetry resolves duplicates by last-one-wins.
		b[m.key] = baggage.Item{
			Value:      m.value,
			Properties: m.properties.asInternal(),
		}
	}

	// Check member numbers after deduplication.
	if len(b) > maxMembers {
		return Baggage{}, errMemberNumber
	}

	bag := Baggage{b}
	if n := len(bag.String()); n > maxBytesPerBaggageString {
		return Baggage{}, fmt.Errorf("%w: %d", errBaggageBytes, n)
	}

	return bag, nil
}

// Parse attempts to decode a baggage-string from the passed string. It
// returns an error if the input is invalid according to the W3C Baggage
// specification.
//
// If there are duplicate list-members contained in baggage, the last one
// defined (reading left-to-right) will be the only one kept. This diverges
// from the W3C Baggage specification which allows duplicate list-members, but
// conforms to the OpenTelemetry Baggage specification.
func Parse(bStr string) (Baggage, error) {
	if bStr == "" {
		return Baggage{}, nil
	}

	if n := len(bStr); n > maxBytesPerBaggageString {
		return Baggage{}, fmt.Errorf("%w: %d", errBaggageBytes, n)
	}

	b := make(baggage.List)
	for _, memberStr := range strings.Split(bStr, listDelimiter) {
		m, err := parseMember(memberStr)
		if err != nil {
			return Baggage{}, err
		}
		// OpenTelemetry resolves duplicates by last-one-wins.
		b[m.key] = baggage.Item{
			Value:      m.value,
			Properties: m.properties.asInternal(),
		}
	}

	// OpenTelemetry does not allow for duplicate list-members, but the W3C
	// specification does. Now that we have deduplicated, ensure the baggage
	// does not exceed list-member limits.
	if len(b) > maxMembers {
		return Baggage{}, errMemberNumber
	}

	return Baggage{b}, nil
}

// Member returns the baggage list-member identified by key.
//
// If there is no list-member matching the passed key the returned Member will
// be a zero-value Member.
// The returned member is not validated, as we assume the validation happened
// when it was added to the Baggage.
func (b Baggage) Member(key string) Member {
	v, ok := b.list[key]
	if !ok {
		// We do not need to worry about distinguishing between the situation
		// where a zero-valued Member is included in the Baggage because a
		// zero-valued Member is invalid according to the W3C Baggage
		// specification (it has an empty key).
		return newInvalidMember()
	}

	return Member{
		key:        key,
		value:      v.Value,
		properties: fromInternalProperties(v.Properties),
		hasData:    true,
	}
}

// Members returns all the baggage list-members.
// The order of the returned list-members does not have significance.
//
// The returned members are not validated, as we assume the validation happened
// when they were added to the Baggage.
func (b Baggage) Members() []Member {
	if len(b.list) == 0 {
		return nil
	}

	members := make([]Member, 0, len(b.list))
	for k, v := range b.list {
		members = append(members, Member{
			key:        k,
			value:      v.Value,
			properties: fromInternalProperties(v.Properties),
			hasData:    true,
		})
	}
	return members
}

// SetMember returns a copy of the Baggage with the member included. If the
// baggage contains a Member with the same key the existing Member is
// replaced.
//
// If member is invalid according to the W3C Baggage specification, an error
// is returned with the original Baggage.
func (b Baggage) SetMember(member Member) (Baggage, error) {
	if !member.hasData {
		return b, errInvalidMember
	}

	n := len(b.list)
	if _, ok := b.list[member.key]; !ok {
		n++
	}
	list := make(baggage.List, n)

	for k, v := range b.list {
		// Do not copy if we are just going to overwrite.
		if k == member.key {
			continue
		}
		list[k] = v
	}

	list[member.key] = baggage.Item{
		Value:      member.value,
		Properties: member.properties.asInternal(),
	}

	return Baggage{list: list}, nil
}

// DeleteMember returns a copy of the Baggage with the list-member identified
// by key removed.
func (b Baggage) DeleteMember(key string) Baggage {
	n := len(b.list)
	if _, ok := b.list[key]; ok {
		n--
	}
	list := make(baggage.List, n)

	for k, v := range b.list {
		if k == key {
			continue
		}
		list[k] = v
	}

	return Baggage{list: list}
}

// Len returns the number of list-members in the Baggage.
func (b Baggage) Len() int {
	return len(b.list)
}

// String encodes Baggage into a string compliant with the W3C Baggage
// specification. The returned string will be invalid if the Baggage contains
// any invalid list-members.
func (b Baggage) String() string {
	members := make([]string, 0, len(b.list))
	for k, v := range b.list {
		members = append(members, Member{
			key:        k,
			value:      v.Value,
			properties: fromInternalProperties(v.Properties),
		}.String())
	}
	return strings.Join(members, listDelimiter)
}
45
vendor/github.com/getsentry/sentry-go/internal/otel/baggage/internal/baggage/baggage.go
generated
vendored
Normal file
@@ -0,0 +1,45 @@
// Adapted from https://github.com/open-telemetry/opentelemetry-go/blob/c21b6b6bb31a2f74edd06e262f1690f3f6ea3d5c/internal/baggage/baggage.go
//
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/*
Package baggage provides base types and functionality to store and retrieve
baggage in Go context. This package exists because the OpenTracing bridge to
OpenTelemetry needs to synchronize state whenever baggage for a context is
modified and that context contains an OpenTracing span. If it were not for
this need this package would not need to exist and the
`go.opentelemetry.io/otel/baggage` package would be the singular place where
W3C baggage is handled.
*/
package baggage

// List is the collection of baggage members. The W3C allows for duplicates,
// but OpenTelemetry does not, therefore, this is represented as a map.
type List map[string]Item

// Item is the value and metadata properties part of a list-member.
type Item struct {
	Value      string
	Properties []Property
}

// Property is a metadata entry for a list-member.
type Property struct {
	Key, Value string

	// HasValue indicates whether the property carries a value at all,
	// distinguishing an empty value from an unset one.
	HasValue bool
}
45
vendor/github.com/getsentry/sentry-go/internal/ratelimit/category.go
generated
vendored
Normal file
@@ -0,0 +1,45 @@
package ratelimit

import (
	"strings"

	"golang.org/x/text/cases"
	"golang.org/x/text/language"
)

// Reference:
// https://github.com/getsentry/relay/blob/0424a2e017d193a93918053c90cdae9472d164bf/relay-common/src/constants.rs#L116-L127

// Category classifies supported payload types that can be ingested by Sentry
// and, therefore, rate limited.
type Category string

// Known rate limit categories. As a special case, the CategoryAll applies to
// all known payload types.
const (
	CategoryAll         Category = ""
	CategoryError       Category = "error"
	CategoryTransaction Category = "transaction"
)

// knownCategories is the set of currently known categories. Other categories
// are ignored for the purpose of rate-limiting.
var knownCategories = map[Category]struct{}{
	CategoryAll:         {},
	CategoryError:       {},
	CategoryTransaction: {},
}

// String returns the category formatted for debugging.
func (c Category) String() string {
	if c == "" {
		return "CategoryAll"
	}

	caser := cases.Title(language.English)
	rv := "Category"
	for _, w := range strings.Fields(string(c)) {
		rv += caser.String(w)
	}
	return rv
}
22
vendor/github.com/getsentry/sentry-go/internal/ratelimit/deadline.go
generated
vendored
Normal file
@@ -0,0 +1,22 @@
package ratelimit

import "time"

// A Deadline is a time instant when a rate limit expires.
type Deadline time.Time

// After reports whether the deadline d is after other.
func (d Deadline) After(other Deadline) bool {
	return time.Time(d).After(time.Time(other))
}

// Equal reports whether d and e represent the same deadline.
func (d Deadline) Equal(e Deadline) bool {
	return time.Time(d).Equal(time.Time(e))
}

// String returns the deadline formatted for debugging.
func (d Deadline) String() string {
	// Like time.Time.String, but without the monotonic clock reading.
	return time.Time(d).Round(0).String()
}
3
vendor/github.com/getsentry/sentry-go/internal/ratelimit/doc.go
generated
vendored
Normal file
@@ -0,0 +1,3 @@
// Package ratelimit provides tools to work with rate limits imposed by Sentry's
// data ingestion pipeline.
package ratelimit
64
vendor/github.com/getsentry/sentry-go/internal/ratelimit/map.go
generated
vendored
Normal file
@@ -0,0 +1,64 @@
package ratelimit

import (
	"net/http"
	"time"
)

// Map maps categories to rate limit deadlines.
//
// A rate limit is in effect for a given category if either the category's
// deadline or the deadline for the special CategoryAll has not yet expired.
//
// Use IsRateLimited to check whether a category is rate-limited.
type Map map[Category]Deadline

// IsRateLimited returns true if the category is currently rate limited.
func (m Map) IsRateLimited(c Category) bool {
	return m.isRateLimited(c, time.Now())
}

func (m Map) isRateLimited(c Category, now time.Time) bool {
	return m.Deadline(c).After(Deadline(now))
}

// Deadline returns the deadline when the rate limit for the given category or
// the special CategoryAll expires, whichever is furthest into the future.
func (m Map) Deadline(c Category) Deadline {
	categoryDeadline := m[c]
	allDeadline := m[CategoryAll]
	if categoryDeadline.After(allDeadline) {
		return categoryDeadline
	}
	return allDeadline
}

// Merge merges the other map into m.
//
// If a category appears in both maps, the deadline that is furthest into the
// future is preserved.
func (m Map) Merge(other Map) {
	for c, d := range other {
		if d.After(m[c]) {
			m[c] = d
		}
	}
}

// FromResponse returns a rate limit map from an HTTP response.
func FromResponse(r *http.Response) Map {
	return fromResponse(r, time.Now())
}

func fromResponse(r *http.Response, now time.Time) Map {
	s := r.Header.Get("X-Sentry-Rate-Limits")
	if s != "" {
		return parseXSentryRateLimits(s, now)
	}
	if r.StatusCode == http.StatusTooManyRequests {
		s := r.Header.Get("Retry-After")
		deadline, _ := parseRetryAfter(s, now)
		return Map{CategoryAll: deadline}
	}
	return Map{}
}
76
vendor/github.com/getsentry/sentry-go/internal/ratelimit/rate_limits.go
generated
vendored
Normal file
@@ -0,0 +1,76 @@
package ratelimit

import (
	"errors"
	"math"
	"strconv"
	"strings"
	"time"
)

var errInvalidXSRLRetryAfter = errors.New("invalid retry-after value")

// parseXSentryRateLimits returns a RateLimits map by parsing an input string in
// the format of the X-Sentry-Rate-Limits header.
//
// Example:
//
//	X-Sentry-Rate-Limits: 60:transaction, 2700:default;error;security
//
// This will rate limit transactions for the next 60 seconds and errors for the
// next 2700 seconds.
//
// Limits for unknown categories are ignored.
func parseXSentryRateLimits(s string, now time.Time) Map {
	// https://github.com/getsentry/relay/blob/0424a2e017d193a93918053c90cdae9472d164bf/relay-server/src/utils/rate_limits.rs#L44-L82
	m := make(Map, len(knownCategories))
	for _, limit := range strings.Split(s, ",") {
		limit = strings.TrimSpace(limit)
		if limit == "" {
			continue
		}
		components := strings.Split(limit, ":")
		if len(components) == 0 {
			continue
		}
		retryAfter, err := parseXSRLRetryAfter(strings.TrimSpace(components[0]), now)
		if err != nil {
			continue
		}
		categories := ""
		if len(components) > 1 {
			categories = components[1]
		}
		for _, category := range strings.Split(categories, ";") {
			c := Category(strings.ToLower(strings.TrimSpace(category)))
			if _, ok := knownCategories[c]; !ok {
				// Skip unknown categories to keep m small.
				continue
			}
			// Always keep the deadline furthest into the future.
			if retryAfter.After(m[c]) {
				m[c] = retryAfter
			}
		}
	}
	return m
}

// parseXSRLRetryAfter parses a string into a retry-after rate limit deadline.
//
// Valid input is a number, possibly signed and possibly floating-point,
// indicating the number of seconds to wait before sending another request.
// Negative values are treated as zero. Fractional values are rounded to the
// next integer.
func parseXSRLRetryAfter(s string, now time.Time) (Deadline, error) {
	// https://github.com/getsentry/relay/blob/0424a2e017d193a93918053c90cdae9472d164bf/relay-quotas/src/rate_limit.rs#L88-L96
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return Deadline{}, errInvalidXSRLRetryAfter
	}
	d := time.Duration(math.Ceil(math.Max(f, 0.0))) * time.Second
	if d < 0 {
		d = 0
	}
	return Deadline(now.Add(d)), nil
}
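The header format documented above ("seconds:category1;category2, ...") can be demonstrated with a simplified standalone sketch of the parsing loop. `parseLimits` and its return shape are illustrative only, not the SDK's API; it maps each known category named in the header value to the number of seconds it is limited for, ignoring unknown categories just like the vendored code:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// parseLimits is a simplified sketch of parseXSentryRateLimits above.
func parseLimits(s string, known map[string]bool) map[string]int {
	out := map[string]int{}
	for _, limit := range strings.Split(s, ",") {
		limit = strings.TrimSpace(limit)
		if limit == "" {
			continue
		}
		parts := strings.Split(limit, ":")
		f, err := strconv.ParseFloat(strings.TrimSpace(parts[0]), 64)
		if err != nil {
			continue
		}
		secs := int(math.Ceil(math.Max(f, 0)))
		cats := ""
		if len(parts) > 1 {
			cats = parts[1]
		}
		for _, c := range strings.Split(cats, ";") {
			c = strings.ToLower(strings.TrimSpace(c))
			// Keep only known categories and the furthest deadline.
			if known[c] && secs > out[c] {
				out[c] = secs
			}
		}
	}
	return out
}

func main() {
	known := map[string]bool{"error": true, "transaction": true}
	fmt.Println(parseLimits("60:transaction, 2700:default;error;security", known))
}
```

Note how "default" and "security" are dropped because they are not in the known set, matching the "Limits for unknown categories are ignored" behavior.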
40
vendor/github.com/getsentry/sentry-go/internal/ratelimit/retry_after.go
generated
vendored
Normal file
@@ -0,0 +1,40 @@
package ratelimit

import (
	"errors"
	"strconv"
	"time"
)

const defaultRetryAfter = 1 * time.Minute

var errInvalidRetryAfter = errors.New("invalid input")

// parseRetryAfter parses a string s as in the standard Retry-After HTTP header
// and returns a deadline until when requests are rate limited and therefore new
// requests should not be sent. The input may be either a date or a non-negative
// integer number of seconds.
//
// See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
//
// parseRetryAfter always returns a usable deadline, even in case of an error.
//
// This is the original rate limiting mechanism used by Sentry, superseded by
// the X-Sentry-Rate-Limits response header.
func parseRetryAfter(s string, now time.Time) (Deadline, error) {
	if s == "" {
		goto invalid
	}
	if n, err := strconv.Atoi(s); err == nil {
		if n < 0 {
			goto invalid
		}
		d := time.Duration(n) * time.Second
		return Deadline(now.Add(d)), nil
	}
	if date, err := time.Parse(time.RFC1123, s); err == nil {
		return Deadline(date), nil
	}
invalid:
	return Deadline(now.Add(defaultRetryAfter)), errInvalidRetryAfter
}
90
vendor/github.com/getsentry/sentry-go/propagation_context.go
generated
vendored
Normal file
@@ -0,0 +1,90 @@
package sentry

import (
	"crypto/rand"
	"encoding/json"
)

type PropagationContext struct {
	TraceID                TraceID                `json:"trace_id"`
	SpanID                 SpanID                 `json:"span_id"`
	ParentSpanID           SpanID                 `json:"parent_span_id"`
	DynamicSamplingContext DynamicSamplingContext `json:"-"`
}

func (p PropagationContext) MarshalJSON() ([]byte, error) {
	type propagationContext PropagationContext
	var parentSpanID string
	if p.ParentSpanID != zeroSpanID {
		parentSpanID = p.ParentSpanID.String()
	}
	return json.Marshal(struct {
		*propagationContext
		ParentSpanID string `json:"parent_span_id,omitempty"`
	}{
		propagationContext: (*propagationContext)(&p),
		ParentSpanID:       parentSpanID,
	})
}

func (p PropagationContext) Map() map[string]interface{} {
	m := map[string]interface{}{
		"trace_id": p.TraceID,
		"span_id":  p.SpanID,
	}

	if p.ParentSpanID != zeroSpanID {
		m["parent_span_id"] = p.ParentSpanID
	}

	return m
}

func NewPropagationContext() PropagationContext {
	p := PropagationContext{}

	if _, err := rand.Read(p.TraceID[:]); err != nil {
		panic(err)
	}

	if _, err := rand.Read(p.SpanID[:]); err != nil {
		panic(err)
	}

	return p
}

func PropagationContextFromHeaders(trace, baggage string) (PropagationContext, error) {
	p := NewPropagationContext()

	if _, err := rand.Read(p.SpanID[:]); err != nil {
		panic(err)
	}

	hasTrace := false
	if trace != "" {
		if tpc, valid := ParseTraceParentContext([]byte(trace)); valid {
			hasTrace = true
			p.TraceID = tpc.TraceID
			p.ParentSpanID = tpc.ParentSpanID
		}
	}

	if baggage != "" {
		dsc, err := DynamicSamplingContextFromHeader([]byte(baggage))
		if err != nil {
			return PropagationContext{}, err
		}
		p.DynamicSamplingContext = dsc
	}

	// In case a sentry-trace header is present but there are no sentry-related
	// values in the baggage, create an empty, frozen DynamicSamplingContext.
	if hasTrace && !p.DynamicSamplingContext.HasEntries() {
		p.DynamicSamplingContext = DynamicSamplingContext{
			Frozen: true,
		}
	}

	return p, nil
}
486
vendor/github.com/getsentry/sentry-go/scope.go
generated
vendored
Normal file
@@ -0,0 +1,486 @@
package sentry

import (
	"bytes"
	"io"
	"net/http"
	"sync"
	"time"
)

// Scope holds contextual data for the current scope.
//
// The scope is an object that can be cloned efficiently and stores data that is
// locally relevant to an event. For instance the scope will hold recorded
// breadcrumbs and similar information.
//
// The scope can be interacted with in two ways. First, the scope is routinely
// updated with information by functions such as AddBreadcrumb which will modify
// the current scope. Second, the current scope can be configured through the
// ConfigureScope function or Hub method of the same name.
//
// The scope is meant to be modified but not inspected directly. When preparing
// an event for reporting, the current client adds information from the current
// scope into the event.
type Scope struct {
	mu          sync.RWMutex
	breadcrumbs []*Breadcrumb
	attachments []*Attachment
	user        User
	tags        map[string]string
	contexts    map[string]Context
	extra       map[string]interface{}
	fingerprint []string
	level       Level
	request     *http.Request
	// requestBody holds a reference to the original request.Body.
	requestBody interface {
		// Bytes returns bytes from the original body, lazily buffered as the
		// original body is read.
		Bytes() []byte
		// Overflow returns true if the body is larger than the maximum buffer
		// size.
		Overflow() bool
	}
	eventProcessors []EventProcessor

	propagationContext PropagationContext
	span               *Span
}

// NewScope creates a new Scope.
func NewScope() *Scope {
	return &Scope{
		breadcrumbs:        make([]*Breadcrumb, 0),
		attachments:        make([]*Attachment, 0),
		tags:               make(map[string]string),
		contexts:           make(map[string]Context),
		extra:              make(map[string]interface{}),
		fingerprint:        make([]string, 0),
		propagationContext: NewPropagationContext(),
	}
}

// AddBreadcrumb adds a new breadcrumb to the current scope and discards the
// oldest one if the limit is reached.
func (scope *Scope) AddBreadcrumb(breadcrumb *Breadcrumb, limit int) {
	if breadcrumb.Timestamp.IsZero() {
		breadcrumb.Timestamp = time.Now()
	}

	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.breadcrumbs = append(scope.breadcrumbs, breadcrumb)
	if len(scope.breadcrumbs) > limit {
		scope.breadcrumbs = scope.breadcrumbs[1 : limit+1]
	}
}

// ClearBreadcrumbs clears all breadcrumbs from the current scope.
func (scope *Scope) ClearBreadcrumbs() {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.breadcrumbs = []*Breadcrumb{}
}

// AddAttachment adds a new attachment to the current scope.
func (scope *Scope) AddAttachment(attachment *Attachment) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.attachments = append(scope.attachments, attachment)
}

// ClearAttachments clears all attachments from the current scope.
func (scope *Scope) ClearAttachments() {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.attachments = []*Attachment{}
}

// SetUser sets the user for the current scope.
func (scope *Scope) SetUser(user User) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.user = user
}

// SetRequest sets the request for the current scope.
func (scope *Scope) SetRequest(r *http.Request) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.request = r

	if r == nil {
		return
	}

	// Don't buffer the request body if we know it is oversized.
	if r.ContentLength > maxRequestBodyBytes {
		return
	}
	// Don't buffer if there is no body.
	if r.Body == nil || r.Body == http.NoBody {
		return
	}
	buf := &limitedBuffer{Capacity: maxRequestBodyBytes}
	r.Body = readCloser{
		Reader: io.TeeReader(r.Body, buf),
		Closer: r.Body,
	}
	scope.requestBody = buf
}

// SetRequestBody sets the request body for the current scope.
//
// This method should only be called when the body bytes are already available
// in memory. Typically, the request body is buffered lazily from the
// Request.Body from SetRequest.
func (scope *Scope) SetRequestBody(b []byte) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	capacity := maxRequestBodyBytes
	overflow := false
	if len(b) > capacity {
		overflow = true
		b = b[:capacity]
	}
	scope.requestBody = &limitedBuffer{
		Capacity: capacity,
		Buffer:   *bytes.NewBuffer(b),
		overflow: overflow,
	}
}

// maxRequestBodyBytes is the default maximum request body size to send to
// Sentry.
const maxRequestBodyBytes = 10 * 1024

// A limitedBuffer is like a bytes.Buffer, but limited to store at most Capacity
// bytes. Any writes past the capacity are silently discarded, similar to
// io.Discard.
type limitedBuffer struct {
	Capacity int

	bytes.Buffer
	overflow bool
}

// Write implements io.Writer.
func (b *limitedBuffer) Write(p []byte) (n int, err error) {
	// Silently ignore writes after overflow.
	if b.overflow {
		return len(p), nil
	}
	left := b.Capacity - b.Len()
	if left < 0 {
		left = 0
	}
	if len(p) > left {
		b.overflow = true
		p = p[:left]
	}
	return b.Buffer.Write(p)
}

// Overflow returns true if the limitedBuffer discarded bytes written to it.
func (b *limitedBuffer) Overflow() bool {
	return b.overflow
}
// readCloser combines an io.Reader and an io.Closer to implement io.ReadCloser.
type readCloser struct {
	io.Reader
	io.Closer
}

// SetTag adds a tag to the current scope.
func (scope *Scope) SetTag(key, value string) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.tags[key] = value
}

// SetTags assigns multiple tags to the current scope.
func (scope *Scope) SetTags(tags map[string]string) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	for k, v := range tags {
		scope.tags[k] = v
	}
}

// RemoveTag removes a tag from the current scope.
func (scope *Scope) RemoveTag(key string) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	delete(scope.tags, key)
}

// SetContext adds a context to the current scope.
func (scope *Scope) SetContext(key string, value Context) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.contexts[key] = value
}

// SetContexts assigns multiple contexts to the current scope.
func (scope *Scope) SetContexts(contexts map[string]Context) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	for k, v := range contexts {
		scope.contexts[k] = v
	}
}

// RemoveContext removes a context from the current scope.
func (scope *Scope) RemoveContext(key string) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	delete(scope.contexts, key)
}

// SetExtra adds an extra to the current scope.
func (scope *Scope) SetExtra(key string, value interface{}) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.extra[key] = value
}

// SetExtras assigns multiple extras to the current scope.
func (scope *Scope) SetExtras(extra map[string]interface{}) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	for k, v := range extra {
		scope.extra[k] = v
	}
}

// RemoveExtra removes an extra from the current scope.
func (scope *Scope) RemoveExtra(key string) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	delete(scope.extra, key)
}

// SetFingerprint sets a new fingerprint for the current scope.
func (scope *Scope) SetFingerprint(fingerprint []string) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.fingerprint = fingerprint
}

// SetLevel sets a new level for the current scope.
func (scope *Scope) SetLevel(level Level) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.level = level
}

// SetPropagationContext sets the propagation context for the current scope.
func (scope *Scope) SetPropagationContext(propagationContext PropagationContext) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.propagationContext = propagationContext
}

// SetSpan sets a span for the current scope.
func (scope *Scope) SetSpan(span *Span) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.span = span
}

// Clone returns a copy of the current scope with all data copied over.
func (scope *Scope) Clone() *Scope {
	scope.mu.RLock()
	defer scope.mu.RUnlock()

	clone := NewScope()
	clone.user = scope.user
	clone.breadcrumbs = make([]*Breadcrumb, len(scope.breadcrumbs))
	copy(clone.breadcrumbs, scope.breadcrumbs)
	clone.attachments = make([]*Attachment, len(scope.attachments))
	copy(clone.attachments, scope.attachments)
	for key, value := range scope.tags {
		clone.tags[key] = value
	}
	for key, value := range scope.contexts {
		clone.contexts[key] = cloneContext(value)
	}
	for key, value := range scope.extra {
		clone.extra[key] = value
	}
	clone.fingerprint = make([]string, len(scope.fingerprint))
	copy(clone.fingerprint, scope.fingerprint)
	clone.level = scope.level
	clone.request = scope.request
	clone.requestBody = scope.requestBody
	clone.eventProcessors = scope.eventProcessors
	clone.propagationContext = scope.propagationContext
	clone.span = scope.span
	return clone
}

// Clear removes the data from the current scope. Not safe for concurrent use.
func (scope *Scope) Clear() {
	*scope = *NewScope()
}

// AddEventProcessor adds an event processor to the current scope.
func (scope *Scope) AddEventProcessor(processor EventProcessor) {
	scope.mu.Lock()
	defer scope.mu.Unlock()

	scope.eventProcessors = append(scope.eventProcessors, processor)
}

// ApplyToEvent takes the data from the current scope and attaches it to the event.
func (scope *Scope) ApplyToEvent(event *Event, hint *EventHint, client *Client) *Event {
	scope.mu.RLock()
	defer scope.mu.RUnlock()

	if len(scope.breadcrumbs) > 0 {
		event.Breadcrumbs = append(event.Breadcrumbs, scope.breadcrumbs...)
	}

	if len(scope.attachments) > 0 {
		event.Attachments = append(event.Attachments, scope.attachments...)
	}

	if len(scope.tags) > 0 {
		if event.Tags == nil {
			event.Tags = make(map[string]string, len(scope.tags))
		}

		for key, value := range scope.tags {
			event.Tags[key] = value
		}
	}

	if len(scope.contexts) > 0 {
		if event.Contexts == nil {
			event.Contexts = make(map[string]Context)
		}

		for key, value := range scope.contexts {
			if key == "trace" && event.Type == transactionType {
				// Do not override the trace context of transactions,
				// otherwise it breaks the transaction event
				// representation. For error events, the trace context is
				// used to link errors and traces/spans in Sentry.
				continue
			}

			// Ensure we are not overwriting event fields.
			if _, ok := event.Contexts[key]; !ok {
				event.Contexts[key] = cloneContext(value)
			}
		}
	}

	if event.Contexts == nil {
		event.Contexts = make(map[string]Context)
	}

	if scope.span != nil {
		if _, ok := event.Contexts["trace"]; !ok {
			event.Contexts["trace"] = scope.span.traceContext().Map()
		}

		transaction := scope.span.GetTransaction()
		if transaction != nil {
			event.sdkMetaData.dsc = DynamicSamplingContextFromTransaction(transaction)
		}
	} else {
		event.Contexts["trace"] = scope.propagationContext.Map()

		dsc := scope.propagationContext.DynamicSamplingContext
		if !dsc.HasEntries() && client != nil {
			dsc = DynamicSamplingContextFromScope(scope, client)
		}
		event.sdkMetaData.dsc = dsc
	}

	if len(scope.extra) > 0 {
		if event.Extra == nil {
			event.Extra = make(map[string]interface{}, len(scope.extra))
		}

		for key, value := range scope.extra {
			event.Extra[key] = value
		}
	}

	if event.User.IsEmpty() {
		event.User = scope.user
	}

	if len(event.Fingerprint) == 0 {
		event.Fingerprint = append(event.Fingerprint, scope.fingerprint...)
	}

	if scope.level != "" {
		event.Level = scope.level
	}

	if event.Request == nil && scope.request != nil {
		event.Request = NewRequest(scope.request)
		// NOTE: The SDK does not attempt to send partial request body data.
		//
		// The reason being that Sentry's ingest pipeline and UI are optimized
		// to show structured data. Additionally, tooling around PII scrubbing
		// relies on structured data; truncated request bodies would create
		// invalid payloads that are more prone to leaking PII data.
		//
		// Users can still send more data along their events if they want to,
		// for example using Event.Extra.
		if scope.requestBody != nil && !scope.requestBody.Overflow() {
			event.Request.Data = string(scope.requestBody.Bytes())
		}
	}

	for _, processor := range scope.eventProcessors {
		id := event.EventID
		event = processor(event, hint)
		if event == nil {
			Logger.Printf("Event dropped by one of the Scope EventProcessors: %s\n", id)
			return nil
		}
	}

	return event
}

// cloneContext returns a new context with keys and values copied from the passed one.
//
// Note: a new Context (map) is returned, but the function does NOT do
// a proper deep copy: if some context values are pointer types (e.g. maps),
// they won't be properly copied.
func cloneContext(c Context) Context {
	res := make(Context, len(c))
	for k, v := range c {
		res[k] = v
	}
	return res
}
132
vendor/github.com/getsentry/sentry-go/sentry.go
generated
vendored
Normal file
@@ -0,0 +1,132 @@
package sentry

import (
	"context"
	"time"
)

// SDKVersion is the version of the SDK.
const SDKVersion = "0.31.1"

// apiVersion is the minimum version of the Sentry API compatible with the
// sentry-go SDK.
const apiVersion = "7"

// Init initializes the SDK with options. The returned error is non-nil if
// options is invalid, for instance if a malformed DSN is provided.
func Init(options ClientOptions) error {
	hub := CurrentHub()
	client, err := NewClient(options)
	if err != nil {
		return err
	}
	hub.BindClient(client)
	return nil
}

// AddBreadcrumb records a new breadcrumb.
//
// The total number of breadcrumbs that can be recorded is limited by the
// configuration on the client.
func AddBreadcrumb(breadcrumb *Breadcrumb) {
	hub := CurrentHub()
	hub.AddBreadcrumb(breadcrumb, nil)
}

// CaptureMessage captures an arbitrary message.
func CaptureMessage(message string) *EventID {
	hub := CurrentHub()
	return hub.CaptureMessage(message)
}

// CaptureException captures an error.
func CaptureException(exception error) *EventID {
	hub := CurrentHub()
	return hub.CaptureException(exception)
}

// CaptureCheckIn captures a (cron) monitor check-in.
func CaptureCheckIn(checkIn *CheckIn, monitorConfig *MonitorConfig) *EventID {
	hub := CurrentHub()
	return hub.CaptureCheckIn(checkIn, monitorConfig)
}

// CaptureEvent captures an event on the currently active client, if any.
//
// The event must already be assembled. Typically code would instead use the
// utility methods like CaptureException. The return value is the event ID. In
// case Sentry is disabled or the event was dropped, the return value is nil.
func CaptureEvent(event *Event) *EventID {
	hub := CurrentHub()
	return hub.CaptureEvent(event)
}

// Recover captures a panic.
func Recover() *EventID {
	if err := recover(); err != nil {
		hub := CurrentHub()
		return hub.Recover(err)
	}
	return nil
}

// RecoverWithContext captures a panic and passes the relevant context object.
func RecoverWithContext(ctx context.Context) *EventID {
	err := recover()
	if err == nil {
		return nil
	}

	hub := GetHubFromContext(ctx)
	if hub == nil {
		hub = CurrentHub()
	}

	return hub.RecoverWithContext(ctx, err)
}

// WithScope is a shorthand for CurrentHub().WithScope.
func WithScope(f func(scope *Scope)) {
	hub := CurrentHub()
	hub.WithScope(f)
}

// ConfigureScope is a shorthand for CurrentHub().ConfigureScope.
func ConfigureScope(f func(scope *Scope)) {
	hub := CurrentHub()
	hub.ConfigureScope(f)
}

// PushScope is a shorthand for CurrentHub().PushScope.
func PushScope() {
	hub := CurrentHub()
	hub.PushScope()
}

// PopScope is a shorthand for CurrentHub().PopScope.
func PopScope() {
	hub := CurrentHub()
	hub.PopScope()
}

// Flush waits until the underlying Transport sends any buffered events to the
// Sentry server, blocking for at most the given timeout. It returns false if
// the timeout was reached. In that case, some events may not have been sent.
//
// Flush should be called before terminating the program to avoid
// unintentionally dropping events.
//
// Do not call Flush indiscriminately after every call to CaptureEvent,
// CaptureException or CaptureMessage. Instead, to have the SDK send events over
// the network synchronously, configure it to use the HTTPSyncTransport in the
// call to Init.
func Flush(timeout time.Duration) bool {
	hub := CurrentHub()
	return hub.Flush(timeout)
}

// LastEventID returns the ID of the last captured event.
func LastEventID() EventID {
	hub := CurrentHub()
	return hub.LastEventID()
}
70
vendor/github.com/getsentry/sentry-go/sourcereader.go
generated
vendored
Normal file
@@ -0,0 +1,70 @@
package sentry

import (
	"bytes"
	"os"
	"sync"
)

type sourceReader struct {
	mu    sync.Mutex
	cache map[string][][]byte
}

func newSourceReader() sourceReader {
	return sourceReader{
		cache: make(map[string][][]byte),
	}
}

func (sr *sourceReader) readContextLines(filename string, line, context int) ([][]byte, int) {
	sr.mu.Lock()
	defer sr.mu.Unlock()

	lines, ok := sr.cache[filename]

	if !ok {
		data, err := os.ReadFile(filename)
		if err != nil {
			sr.cache[filename] = nil
			return nil, 0
		}
		lines = bytes.Split(data, []byte{'\n'})
		sr.cache[filename] = lines
	}

	return sr.calculateContextLines(lines, line, context)
}

func (sr *sourceReader) calculateContextLines(lines [][]byte, line, context int) ([][]byte, int) {
	// Stacktrace lines are 1-indexed, slices are 0-indexed.
	line--

	// contextLine points to the line that caused the issue, relative to the
	// returned slice.
	contextLine := context

	if lines == nil || line >= len(lines) || line < 0 {
		return nil, 0
	}

	if context < 0 {
		context = 0
		contextLine = 0
	}

	start := line - context

	if start < 0 {
		contextLine += start
		start = 0
	}

	end := line + context + 1

	if end > len(lines) {
		end = len(lines)
	}

	return lines[start:end], contextLine
}
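The index arithmetic in calculateContextLines above (clamping the window at the top of the file shifts the target's position inside the returned slice) can be isolated into a small standalone sketch. `contextWindow` is illustrative only:

```go
package main

import "fmt"

// contextWindow mirrors the bounds arithmetic of calculateContextLines above:
// given a 1-indexed target line, a context radius, and n total lines, it
// returns the half-open [start, end) slice bounds and the index of the target
// line within that window.
func contextWindow(n, line, context int) (start, end, contextLine int) {
	line-- // 1-indexed to 0-indexed
	contextLine = context
	start = line - context
	if start < 0 {
		// The window is clamped at the top, so the target shifts left.
		contextLine += start
		start = 0
	}
	end = line + context + 1
	if end > n {
		end = n
	}
	return start, end, contextLine
}

func main() {
	// Line 2 of a 10-line file with 5 lines of context: the window is clamped
	// at the top, so the target sits at index 1 of the returned slice.
	fmt.Println(contextWindow(10, 2, 5)) // 0 7 1
}
```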
56
vendor/github.com/getsentry/sentry-go/span_recorder.go
generated
vendored
Normal file
@@ -0,0 +1,56 @@
package sentry

import (
	"sync"
)

// A spanRecorder stores a span tree that makes up a transaction. Safe for
// concurrent use. It is okay to add child spans from multiple goroutines.
type spanRecorder struct {
	mu           sync.Mutex
	spans        []*Span
	overflowOnce sync.Once
}

// record stores a span. The first stored span is assumed to be the root of a
// span tree.
func (r *spanRecorder) record(s *Span) {
	maxSpans := defaultMaxSpans
	if client := CurrentHub().Client(); client != nil {
		maxSpans = client.options.MaxSpans
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.spans) >= maxSpans {
		r.overflowOnce.Do(func() {
			root := r.spans[0]
			Logger.Printf("Too many spans: dropping spans from transaction with TraceID=%s SpanID=%s limit=%d",
				root.TraceID, root.SpanID, maxSpans)
		})
		// TODO(tracing): mark the transaction event in some way to
		// communicate that spans were dropped.
		return
	}
	r.spans = append(r.spans, s)
}

// root returns the first recorded span. Returns nil if none have been recorded.
func (r *spanRecorder) root() *Span {
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.spans) == 0 {
		return nil
	}
	return r.spans[0]
}

// children returns a list of all recorded spans, except the root. Returns nil
// if there are no children.
func (r *spanRecorder) children() []*Span {
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.spans) < 2 {
		return nil
	}
	return r.spans[1:]
}
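The spanRecorder above combines a mutex-guarded capped slice with a sync.Once so the overflow is logged exactly once, no matter how many spans are dropped. A minimal standalone sketch of that pattern (the `recorder` type and its cap are illustrative, not the SDK's types):

```go
package main

import (
	"fmt"
	"sync"
)

// recorder keeps at most max items; items past the cap are dropped,
// and the overflow is reported only once via sync.Once.
type recorder struct {
	mu       sync.Mutex
	items    []int
	overflow sync.Once
	max      int
}

func (r *recorder) record(v int) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.items) >= r.max {
		r.overflow.Do(func() {
			fmt.Println("limit reached; dropping further items")
		})
		return
	}
	r.items = append(r.items, v)
}

func main() {
	r := &recorder{max: 2}
	for i := 0; i < 5; i++ { // three of these are silently dropped
		r.record(i)
	}
	fmt.Println(len(r.items)) // 2
}
```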
407 vendor/github.com/getsentry/sentry-go/stacktrace.go generated vendored Normal file
@@ -0,0 +1,407 @@
package sentry

import (
	"go/build"
	"reflect"
	"runtime"
	"slices"
	"strings"
)

const unknown string = "unknown"

// The module download is split into two parts: downloading the go.mod and downloading the actual code.
// If you have dependencies only needed for tests, then they will show up in your go.mod,
// and go get will download their go.mods, but it will not download their code.
// The test-only dependencies get downloaded only when you need it, such as the first time you run go test.
//
// https://github.com/golang/go/issues/26913#issuecomment-411976222

// Stacktrace holds information about the frames of the stack.
type Stacktrace struct {
	Frames        []Frame `json:"frames,omitempty"`
	FramesOmitted []uint  `json:"frames_omitted,omitempty"`
}

// NewStacktrace creates a stacktrace using runtime.Callers.
func NewStacktrace() *Stacktrace {
	pcs := make([]uintptr, 100)
	n := runtime.Callers(1, pcs)

	if n == 0 {
		return nil
	}

	runtimeFrames := extractFrames(pcs[:n])
	frames := createFrames(runtimeFrames)

	stacktrace := Stacktrace{
		Frames: frames,
	}

	return &stacktrace
}

// TODO: Make it configurable so that anyone can provide their own implementation?
// Use of reflection allows us to not have a hard dependency on any given
// package, so we don't have to import it.

// ExtractStacktrace creates a new Stacktrace based on the given error.
func ExtractStacktrace(err error) *Stacktrace {
	method := extractReflectedStacktraceMethod(err)

	var pcs []uintptr

	if method.IsValid() {
		pcs = extractPcs(method)
	} else {
		pcs = extractXErrorsPC(err)
	}

	if len(pcs) == 0 {
		return nil
	}

	runtimeFrames := extractFrames(pcs)
	frames := createFrames(runtimeFrames)

	stacktrace := Stacktrace{
		Frames: frames,
	}

	return &stacktrace
}

func extractReflectedStacktraceMethod(err error) reflect.Value {
	errValue := reflect.ValueOf(err)

	// https://github.com/go-errors/errors
	methodStackFrames := errValue.MethodByName("StackFrames")
	if methodStackFrames.IsValid() {
		return methodStackFrames
	}

	// https://github.com/pkg/errors
	methodStackTrace := errValue.MethodByName("StackTrace")
	if methodStackTrace.IsValid() {
		return methodStackTrace
	}

	// https://github.com/pingcap/errors
	methodGetStackTracer := errValue.MethodByName("GetStackTracer")
	if methodGetStackTracer.IsValid() {
		stacktracer := methodGetStackTracer.Call(nil)[0]
		stacktracerStackTrace := reflect.ValueOf(stacktracer).MethodByName("StackTrace")

		if stacktracerStackTrace.IsValid() {
			return stacktracerStackTrace
		}
	}

	return reflect.Value{}
}

func extractPcs(method reflect.Value) []uintptr {
	var pcs []uintptr

	stacktrace := method.Call(nil)[0]

	if stacktrace.Kind() != reflect.Slice {
		return nil
	}

	for i := 0; i < stacktrace.Len(); i++ {
		pc := stacktrace.Index(i)

		switch pc.Kind() {
		case reflect.Uintptr:
			pcs = append(pcs, uintptr(pc.Uint()))
		case reflect.Struct:
			for _, fieldName := range []string{"ProgramCounter", "PC"} {
				field := pc.FieldByName(fieldName)
				if !field.IsValid() {
					continue
				}
				if field.Kind() == reflect.Uintptr {
					pcs = append(pcs, uintptr(field.Uint()))
					break
				}
			}
		}
	}

	return pcs
}

// extractXErrorsPC extracts program counters from error values compatible with
// the error types from golang.org/x/xerrors.
//
// It returns nil if err is not compatible with errors from that package or if
// no program counters are stored in err.
func extractXErrorsPC(err error) []uintptr {
	// This implementation uses the reflect package to avoid a hard dependency
	// on third-party packages.

	// We don't know if err matches the expected type. For simplicity, instead
	// of trying to account for all possible ways things can go wrong, some
	// assumptions are made and if they are violated the code will panic. We
	// recover from any panic and ignore it, returning nil.
	//nolint: errcheck
	defer func() { recover() }()

	field := reflect.ValueOf(err).Elem().FieldByName("frame") // type Frame struct{ frames [3]uintptr }
	field = field.FieldByName("frames")
	field = field.Slice(1, field.Len()) // drop first pc pointing to xerrors.New
	pc := make([]uintptr, field.Len())
	for i := 0; i < field.Len(); i++ {
		pc[i] = uintptr(field.Index(i).Uint())
	}
	return pc
}

// Frame represents a function call and its metadata. Frames are associated
// with a Stacktrace.
type Frame struct {
	Function string `json:"function,omitempty"`
	Symbol   string `json:"symbol,omitempty"`
	// Module is, despite the name, the Sentry protocol equivalent of a Go
	// package's import path.
	Module      string                 `json:"module,omitempty"`
	Filename    string                 `json:"filename,omitempty"`
	AbsPath     string                 `json:"abs_path,omitempty"`
	Lineno      int                    `json:"lineno,omitempty"`
	Colno       int                    `json:"colno,omitempty"`
	PreContext  []string               `json:"pre_context,omitempty"`
	ContextLine string                 `json:"context_line,omitempty"`
	PostContext []string               `json:"post_context,omitempty"`
	InApp       bool                   `json:"in_app"`
	Vars        map[string]interface{} `json:"vars,omitempty"`
	// Package and the below are not used for Go stack trace frames. In
	// other platforms it refers to a container where the Module can be
	// found. For example, a Java JAR, a .NET Assembly, or a native
	// dynamic library. They exist for completeness, allowing the
	// construction and reporting of custom event payloads.
	Package         string `json:"package,omitempty"`
	InstructionAddr string `json:"instruction_addr,omitempty"`
	AddrMode        string `json:"addr_mode,omitempty"`
	SymbolAddr      string `json:"symbol_addr,omitempty"`
	ImageAddr       string `json:"image_addr,omitempty"`
	Platform        string `json:"platform,omitempty"`
	StackStart      bool   `json:"stack_start,omitempty"`
}

// NewFrame assembles a stacktrace frame out of runtime.Frame.
func NewFrame(f runtime.Frame) Frame {
	function := f.Function
	var pkg string

	if function != "" {
		pkg, function = splitQualifiedFunctionName(function)
	}

	return newFrame(pkg, function, f.File, f.Line)
}

// Like filepath.IsAbs() but doesn't care what platform you run this on.
// I.e. it also recognizes `/path/to/file` when run on Windows.
func isAbsPath(path string) bool {
	if len(path) == 0 {
		return false
	}

	// If the volume name starts with a double slash, this is an absolute path.
	if len(path) >= 1 && (path[0] == '/' || path[0] == '\\') {
		return true
	}

	// Windows absolute path, see https://learn.microsoft.com/en-us/dotnet/standard/io/file-path-formats
	if len(path) >= 3 && path[1] == ':' && (path[2] == '/' || path[2] == '\\') {
		return true
	}

	return false
}

func newFrame(module string, function string, file string, line int) Frame {
	frame := Frame{
		Lineno:   line,
		Module:   module,
		Function: function,
	}

	switch {
	case len(file) == 0:
		frame.Filename = unknown
		// Leave abspath as the empty string to be omitted when serializing event as JSON.
	case isAbsPath(file):
		frame.AbsPath = file
		// TODO: in the general case, it is not trivial to come up with a
		// "project relative" path with the data we have in run time.
		// We shall not use filepath.Base because it creates ambiguous paths and
		// affects the "Suspect Commits" feature.
		// For now, leave relpath empty to be omitted when serializing the event
		// as JSON. Improve this later.
	default:
		// f.File is a relative path. This may happen when the binary is built
		// with the -trimpath flag.
		frame.Filename = file
		// Omit abspath when serializing the event as JSON.
	}

	setInAppFrame(&frame)

	return frame
}

// splitQualifiedFunctionName splits a package path-qualified function name into
// package name and function name. Such qualified names are found in
// runtime.Frame.Function values.
func splitQualifiedFunctionName(name string) (pkg string, fun string) {
	pkg = packageName(name)
	if len(pkg) > 0 {
		fun = name[len(pkg)+1:]
	}
	return
}

func extractFrames(pcs []uintptr) []runtime.Frame {
	var frames = make([]runtime.Frame, 0, len(pcs))
	callersFrames := runtime.CallersFrames(pcs)

	for {
		callerFrame, more := callersFrames.Next()

		frames = append(frames, callerFrame)

		if !more {
			break
		}
	}

	slices.Reverse(frames)
	return frames
}

// createFrames creates Frame objects while filtering out frames that are not
// meant to be reported to Sentry, those are frames internal to the SDK or Go.
func createFrames(frames []runtime.Frame) []Frame {
	if len(frames) == 0 {
		return nil
	}

	result := make([]Frame, 0, len(frames))

	for _, frame := range frames {
		function := frame.Function
		var pkg string
		if function != "" {
			pkg, function = splitQualifiedFunctionName(function)
		}

		if !shouldSkipFrame(pkg) {
			result = append(result, newFrame(pkg, function, frame.File, frame.Line))
		}
	}

	// Fix issues grouping errors with the new fully qualified function names
	// introduced from Go 1.21
	result = cleanupFunctionNamePrefix(result)
	return result
}

// TODO ID: why do we want to do this?
// I'm not aware of other SDKs skipping all Sentry frames, regardless of their position in the stacktrace.
// For example, in the .NET SDK, only the first frames are skipped until the call to the SDK.
// As is, this will also hide any intermediate frames in the stack and make debugging issues harder.
func shouldSkipFrame(module string) bool {
	// Skip Go internal frames.
	if module == "runtime" || module == "testing" {
		return true
	}

	// Skip Sentry internal frames, except for frames in _test packages (for testing).
	if strings.HasPrefix(module, "github.com/getsentry/sentry-go") &&
		!strings.HasSuffix(module, "_test") {
		return true
	}

	return false
}

// On Windows, GOROOT has backslashes, but we want forward slashes.
var goRoot = strings.ReplaceAll(build.Default.GOROOT, "\\", "/")

func setInAppFrame(frame *Frame) {
	frame.InApp = true
	if strings.HasPrefix(frame.AbsPath, goRoot) || strings.Contains(frame.Module, "vendor") ||
		strings.Contains(frame.Module, "third_party") {
		frame.InApp = false
	}
}

func callerFunctionName() string {
	pcs := make([]uintptr, 1)
	runtime.Callers(3, pcs)
	callersFrames := runtime.CallersFrames(pcs)
	callerFrame, _ := callersFrames.Next()
	return baseName(callerFrame.Function)
}

// packageName returns the package part of the symbol name, or the empty string
// if there is none.
// It replicates https://golang.org/pkg/debug/gosym/#Sym.PackageName, avoiding a
// dependency on debug/gosym.
func packageName(name string) string {
	if isCompilerGeneratedSymbol(name) {
		return ""
	}

	pathend := strings.LastIndex(name, "/")
	if pathend < 0 {
		pathend = 0
	}

	if i := strings.Index(name[pathend:], "."); i != -1 {
		return name[:pathend+i]
	}
	return ""
}

// baseName returns the symbol name without the package or receiver name.
// It replicates https://golang.org/pkg/debug/gosym/#Sym.BaseName, avoiding a
// dependency on debug/gosym.
func baseName(name string) string {
	if i := strings.LastIndex(name, "."); i != -1 {
		return name[i+1:]
	}
	return name
}

func isCompilerGeneratedSymbol(name string) bool {
	// In versions of Go 1.20 and above a prefix of "type:" and "go:" is a
	// compiler-generated symbol that doesn't belong to any package.
	// See variable reservedimports in cmd/compile/internal/gc/subr.go
	if strings.HasPrefix(name, "go:") || strings.HasPrefix(name, "type:") {
		return true
	}
	return false
}

// Walk backwards through the results and for the current function name
// remove its parent function's prefix, leaving only its actual name. This
// fixes issues grouping errors with the new fully qualified function names
// introduced from Go 1.21.
func cleanupFunctionNamePrefix(f []Frame) []Frame {
	for i := len(f) - 1; i > 0; i-- {
		name := f[i].Function
		parentName := f[i-1].Function + "."

		if !strings.HasPrefix(name, parentName) {
			continue
		}

		f[i].Function = name[len(parentName):]
	}

	return f
}
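The splitting rule used by splitQualifiedFunctionName/packageName above (cut a runtime.Frame.Function value at the first dot after the last slash) can be shown with a self-contained sketch; the `split` helper below is illustrative, not the SDK's exported API:

```go
package main

import (
	"fmt"
	"strings"
)

// split cuts a package path-qualified symbol name, as found in
// runtime.Frame.Function, at the first '.' following the last '/':
// everything before it is the import path, everything after it is the
// (possibly receiver-qualified) function name.
func split(name string) (pkg, fun string) {
	pathend := strings.LastIndex(name, "/")
	if pathend < 0 {
		pathend = 0
	}
	if i := strings.Index(name[pathend:], "."); i != -1 {
		pkg = name[:pathend+i]
		fun = name[pathend+i+1:]
	}
	return
}

func main() {
	pkg, fun := split("github.com/user/repo/pkg.(*Type).Method")
	fmt.Println(pkg) // github.com/user/repo/pkg
	fmt.Println(fun) // (*Type).Method
}
```

The dot inside the receiver `(*Type)` is not a problem because only the first dot after the final path separator is considered.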
19 vendor/github.com/getsentry/sentry-go/traces_sampler.go generated vendored Normal file
@@ -0,0 +1,19 @@
package sentry

// A SamplingContext is passed to a TracesSampler to determine a sampling
// decision.
//
// TODO(tracing): possibly expand SamplingContext to include custom /
// user-provided data.
type SamplingContext struct {
	Span   *Span // The current span, always non-nil.
	Parent *Span // The parent span, may be nil.
}

// The TracesSampler type is an adapter to allow the use of ordinary
// functions as a TracesSampler.
type TracesSampler func(ctx SamplingContext) float64

func (f TracesSampler) Sample(ctx SamplingContext) float64 {
	return f(ctx)
}
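TracesSampler above uses the function-adapter idiom (the same pattern as http.HandlerFunc): a named function type implements an interface-shaped method by calling itself. A minimal standalone sketch of the idiom, with illustrative names that are not part of the SDK:

```go
package main

import "fmt"

// Context is a stand-in for SamplingContext.
type Context struct{ Name string }

// Sampler is the interface-shaped contract.
type Sampler interface {
	Sample(ctx Context) float64
}

// SamplerFunc adapts an ordinary function to the Sampler interface,
// exactly like TracesSampler.Sample forwarding to the function itself.
type SamplerFunc func(ctx Context) float64

func (f SamplerFunc) Sample(ctx Context) float64 { return f(ctx) }

func main() {
	var s Sampler = SamplerFunc(func(ctx Context) float64 {
		if ctx.Name == "health" {
			return 0 // never sample health checks
		}
		return 0.2
	})
	fmt.Println(s.Sample(Context{Name: "health"}), s.Sample(Context{Name: "checkout"}))
}
```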
1028 vendor/github.com/getsentry/sentry-go/tracing.go generated vendored Normal file
File diff suppressed because it is too large
709 vendor/github.com/getsentry/sentry-go/transport.go generated vendored Normal file
@@ -0,0 +1,709 @@
|
||||
package sentry
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/tls"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/getsentry/sentry-go/internal/ratelimit"
|
||||
)
|
||||
|
||||
const defaultBufferSize = 30
|
||||
const defaultTimeout = time.Second * 30
|
||||
|
||||
// maxDrainResponseBytes is the maximum number of bytes that transport
|
||||
// implementations will read from response bodies when draining them.
|
||||
//
|
||||
// Sentry's ingestion API responses are typically short and the SDK doesn't need
|
||||
// the contents of the response body. However, the net/http HTTP client requires
|
||||
// response bodies to be fully drained (and closed) for TCP keep-alive to work.
|
||||
//
|
||||
// maxDrainResponseBytes strikes a balance between reading too much data (if the
|
||||
// server is misbehaving) and reusing TCP connections.
|
||||
const maxDrainResponseBytes = 16 << 10
|
||||
|
||||
// Transport is used by the Client to deliver events to remote server.
|
||||
type Transport interface {
|
||||
Flush(timeout time.Duration) bool
|
||||
Configure(options ClientOptions)
|
||||
SendEvent(event *Event)
|
||||
Close()
|
||||
}
|
||||
|
||||
func getProxyConfig(options ClientOptions) func(*http.Request) (*url.URL, error) {
|
||||
if options.HTTPSProxy != "" {
|
||||
return func(*http.Request) (*url.URL, error) {
|
||||
return url.Parse(options.HTTPSProxy)
|
||||
}
|
||||
}
|
||||
|
||||
if options.HTTPProxy != "" {
|
||||
return func(*http.Request) (*url.URL, error) {
|
||||
return url.Parse(options.HTTPProxy)
|
||||
}
|
||||
}
|
||||
|
||||
return http.ProxyFromEnvironment
|
||||
}
|
||||
|
||||
func getTLSConfig(options ClientOptions) *tls.Config {
|
||||
if options.CaCerts != nil {
|
||||
// #nosec G402 -- We should be using `MinVersion: tls.VersionTLS12`,
|
||||
// but we don't want to break peoples code without the major bump.
|
||||
return &tls.Config{
|
||||
RootCAs: options.CaCerts,
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func getRequestBodyFromEvent(event *Event) []byte {
|
||||
body, err := json.Marshal(event)
|
||||
if err == nil {
|
||||
return body
|
||||
}
|
||||
|
||||
msg := fmt.Sprintf("Could not encode original event as JSON. "+
|
||||
"Succeeded by removing Breadcrumbs, Contexts and Extra. "+
|
||||
"Please verify the data you attach to the scope. "+
|
||||
"Error: %s", err)
|
||||
// Try to serialize the event, with all the contextual data that allows for interface{} stripped.
|
||||
event.Breadcrumbs = nil
|
||||
event.Contexts = nil
|
||||
event.Extra = map[string]interface{}{
|
||||
"info": msg,
|
||||
}
|
||||
body, err = json.Marshal(event)
|
||||
if err == nil {
|
||||
Logger.Println(msg)
|
||||
return body
|
||||
}
|
||||
|
||||
// This should _only_ happen when Event.Exception[0].Stacktrace.Frames[0].Vars is unserializable
|
||||
// Which won't ever happen, as we don't use it now (although it's the part of public interface accepted by Sentry)
|
||||
// Juuust in case something, somehow goes utterly wrong.
|
||||
Logger.Println("Event couldn't be marshaled, even with stripped contextual data. Skipping delivery. " +
|
||||
"Please notify the SDK owners with possibly broken payload.")
|
||||
return nil
|
||||
}
|
||||
|
||||
func encodeAttachment(enc *json.Encoder, b io.Writer, attachment *Attachment) error {
|
||||
// Attachment header
|
||||
err := enc.Encode(struct {
|
||||
Type string `json:"type"`
|
||||
Length int `json:"length"`
|
||||
Filename string `json:"filename"`
|
||||
ContentType string `json:"content_type,omitempty"`
|
||||
}{
|
||||
Type: "attachment",
|
||||
Length: len(attachment.Payload),
|
||||
Filename: attachment.Filename,
|
||||
ContentType: attachment.ContentType,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Attachment payload
|
||||
if _, err = b.Write(attachment.Payload); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// "Envelopes should be terminated with a trailing newline."
|
||||
//
|
||||
// [1]: https://develop.sentry.dev/sdk/envelopes/#envelopes
|
||||
if _, err := b.Write([]byte("\n")); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func encodeEnvelopeItem(enc *json.Encoder, itemType string, body json.RawMessage) error {
|
||||
// Item header
|
||||
err := enc.Encode(struct {
|
||||
Type string `json:"type"`
|
||||
Length int `json:"length"`
|
||||
}{
|
||||
Type: itemType,
|
||||
Length: len(body),
|
||||
})
|
||||
if err == nil {
|
||||
// payload
|
||||
err = enc.Encode(body)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func envelopeFromBody(event *Event, dsn *Dsn, sentAt time.Time, body json.RawMessage) (*bytes.Buffer, error) {
|
||||
var b bytes.Buffer
|
||||
enc := json.NewEncoder(&b)
|
||||
|
||||
// Construct the trace envelope header
|
||||
var trace = map[string]string{}
|
||||
if dsc := event.sdkMetaData.dsc; dsc.HasEntries() {
|
||||
for k, v := range dsc.Entries {
|
||||
trace[k] = v
|
||||
}
|
||||
}
|
||||
|
||||
// Envelope header
|
||||
err := enc.Encode(struct {
|
||||
EventID EventID `json:"event_id"`
|
||||
SentAt time.Time `json:"sent_at"`
|
||||
Dsn string `json:"dsn"`
|
||||
Sdk map[string]string `json:"sdk"`
|
||||
Trace map[string]string `json:"trace,omitempty"`
|
||||
}{
|
||||
EventID: event.EventID,
|
||||
SentAt: sentAt,
|
||||
Trace: trace,
|
||||
Dsn: dsn.String(),
|
||||
Sdk: map[string]string{
|
||||
"name": event.Sdk.Name,
|
||||
"version": event.Sdk.Version,
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
switch event.Type {
|
||||
case transactionType, checkInType:
|
||||
err = encodeEnvelopeItem(enc, event.Type, body)
|
||||
default:
|
||||
err = encodeEnvelopeItem(enc, eventType, body)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Attachments
|
||||
for _, attachment := range event.Attachments {
|
||||
if err := encodeAttachment(enc, &b, attachment); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return &b, nil
|
||||
}
|
||||
|
||||
func getRequestFromEvent(ctx context.Context, event *Event, dsn *Dsn) (r *http.Request, err error) {
|
||||
defer func() {
|
||||
if r != nil {
|
||||
r.Header.Set("User-Agent", fmt.Sprintf("%s/%s", event.Sdk.Name, event.Sdk.Version))
|
||||
r.Header.Set("Content-Type", "application/x-sentry-envelope")
|
||||
|
||||
auth := fmt.Sprintf("Sentry sentry_version=%s, "+
|
||||
"sentry_client=%s/%s, sentry_key=%s", apiVersion, event.Sdk.Name, event.Sdk.Version, dsn.publicKey)
|
||||
|
||||
// The key sentry_secret is effectively deprecated and no longer needs to be set.
|
||||
// However, since it was required in older self-hosted versions,
|
||||
// it should still passed through to Sentry if set.
|
||||
if dsn.secretKey != "" {
|
||||
auth = fmt.Sprintf("%s, sentry_secret=%s", auth, dsn.secretKey)
|
||||
}
|
||||
|
||||
r.Header.Set("X-Sentry-Auth", auth)
|
||||
}
|
||||
}()
|
||||
|
||||
body := getRequestBodyFromEvent(event)
|
||||
if body == nil {
|
||||
return nil, errors.New("event could not be marshaled")
|
||||
}
|
||||
|
||||
envelope, err := envelopeFromBody(event, dsn, time.Now(), body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if ctx == nil {
|
||||
ctx = context.Background()
|
||||
}
|
||||
|
||||
return http.NewRequestWithContext(
|
||||
ctx,
|
||||
http.MethodPost,
|
||||
dsn.GetAPIURL().String(),
|
||||
envelope,
|
||||
)
|
||||
}
|
||||
|
||||
func categoryFor(eventType string) ratelimit.Category {
|
||||
switch eventType {
|
||||
case "":
|
||||
return ratelimit.CategoryError
|
||||
case transactionType:
|
||||
return ratelimit.CategoryTransaction
|
||||
default:
|
||||
return ratelimit.Category(eventType)
|
||||
}
|
||||
}
|
||||
|
||||
// ================================
|
||||
// HTTPTransport
|
||||
// ================================
|
||||
|
||||
// A batch groups items that are processed sequentially.
|
||||
type batch struct {
|
||||
items chan batchItem
|
||||
started chan struct{} // closed to signal items started to be worked on
|
||||
done chan struct{} // closed to signal completion of all items
|
||||
}
|
||||
|
||||
type batchItem struct {
|
||||
request *http.Request
|
||||
category ratelimit.Category
|
||||
}
|
||||
|
||||
// HTTPTransport is the default, non-blocking, implementation of Transport.
|
||||
//
|
||||
// Clients using this transport will enqueue requests in a buffer and return to
|
||||
// the caller before any network communication has happened. Requests are sent
|
||||
// to Sentry sequentially from a background goroutine.
|
||||
type HTTPTransport struct {
|
||||
dsn *Dsn
|
||||
client *http.Client
|
||||
transport http.RoundTripper
|
||||
|
||||
// buffer is a channel of batches. Calling Flush terminates work on the
|
||||
// current in-flight items and starts a new batch for subsequent events.
|
||||
buffer chan batch
|
||||
|
||||
start sync.Once
|
||||
|
||||
// Size of the transport buffer. Defaults to 30.
|
||||
BufferSize int
|
||||
// HTTP Client request timeout. Defaults to 30 seconds.
|
||||
Timeout time.Duration
|
||||
|
||||
mu sync.RWMutex
|
||||
limits ratelimit.Map
|
||||
|
||||
// receiving signal will terminate worker.
|
||||
done chan struct{}
|
||||
}
|
||||
|
||||
// NewHTTPTransport returns a new pre-configured instance of HTTPTransport.
|
||||
func NewHTTPTransport() *HTTPTransport {
|
||||
transport := HTTPTransport{
|
||||
BufferSize: defaultBufferSize,
|
||||
Timeout: defaultTimeout,
|
||||
done: make(chan struct{}),
|
||||
}
|
||||
return &transport
|
||||
}
|
||||
|
||||
// Configure is called by the Client itself, providing it it's own ClientOptions.
|
||||
func (t *HTTPTransport) Configure(options ClientOptions) {
|
||||
dsn, err := NewDsn(options.Dsn)
|
||||
if err != nil {
|
||||
Logger.Printf("%v\n", err)
|
||||
return
|
||||
}
|
||||
t.dsn = dsn
|
||||
|
||||
// A buffered channel with capacity 1 works like a mutex, ensuring only one
|
||||
// goroutine can access the current batch at a given time. Access is
|
||||
// synchronized by reading from and writing to the channel.
|
||||
t.buffer = make(chan batch, 1)
|
||||
t.buffer <- batch{
|
||||
items: make(chan batchItem, t.BufferSize),
|
||||
started: make(chan struct{}),
|
||||
done: make(chan struct{}),
|
||||
}
|
||||
|
||||
if options.HTTPTransport != nil {
|
||||
t.transport = options.HTTPTransport
|
||||
} else {
|
||||
t.transport = &http.Transport{
|
||||
Proxy: getProxyConfig(options),
|
||||
TLSClientConfig: getTLSConfig(options),
|
||||
}
|
||||
}
|
||||
|
||||
if options.HTTPClient != nil {
|
||||
t.client = options.HTTPClient
|
||||
} else {
|
||||
t.client = &http.Client{
|
||||
Transport: t.transport,
|
||||
Timeout: t.Timeout,
|
||||
}
|
||||
}
|
||||
|
||||
t.start.Do(func() {
|
||||
go t.worker()
|
||||
})
|
||||
}
|
||||
|
||||
// SendEvent assembles a new packet out of Event and sends it to the remote server.
|
||||
func (t *HTTPTransport) SendEvent(event *Event) {
|
||||
t.SendEventWithContext(context.Background(), event)
|
||||
}
|
||||
|
||||
// SendEventWithContext assembles a new packet out of Event and sends it to the remote server.
|
||||
func (t *HTTPTransport) SendEventWithContext(ctx context.Context, event *Event) {
|
||||
if t.dsn == nil {
|
||||
return
|
||||
}
|
||||
|
||||
category := categoryFor(event.Type)
|
||||
|
||||
if t.disabled(category) {
|
||||
return
|
||||
}
|
||||
|
||||
request, err := getRequestFromEvent(ctx, event, t.dsn)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// <-t.buffer is equivalent to acquiring a lock to access the current batch.
|
||||
// A few lines below, t.buffer <- b releases the lock.
|
||||
//
|
||||
// The lock must be held during the select block below to guarantee that
|
||||
// b.items is not closed while trying to send to it. Remember that sending
|
||||
// on a closed channel panics.
|
||||
//
|
||||
// Note that the select block takes a bounded amount of CPU time because of
|
||||
// the default case that is executed if sending on b.items would block. That
|
||||
// is, the event is dropped if it cannot be sent immediately to the b.items
|
||||
// channel (used as a queue).
|
||||
b := <-t.buffer
|
||||
|
||||
select {
|
||||
case b.items <- batchItem{
|
||||
request: request,
|
||||
category: category,
|
||||
}:
|
||||
var eventType string
|
||||
if event.Type == transactionType {
|
||||
eventType = "transaction"
|
||||
} else {
|
||||
eventType = fmt.Sprintf("%s event", event.Level)
|
||||
}
|
||||
Logger.Printf(
|
||||
"Sending %s [%s] to %s project: %s",
|
||||
eventType,
|
||||
event.EventID,
|
||||
t.dsn.host,
|
||||
t.dsn.projectID,
|
||||
)
|
||||
default:
|
||||
Logger.Println("Event dropped due to transport buffer being full.")
|
||||
}
|
||||
|
||||
t.buffer <- b
|
||||
}

// Flush waits until any buffered events are sent to the Sentry server, blocking
// for at most the given timeout. It returns false if the timeout was reached.
// In that case, some events may not have been sent.
//
// Flush should be called before terminating the program to avoid
// unintentionally dropping events.
//
// Do not call Flush indiscriminately after every call to SendEvent. Instead, to
// have the SDK send events over the network synchronously, configure it to use
// the HTTPSyncTransport in the call to Init.
func (t *HTTPTransport) Flush(timeout time.Duration) bool {
	toolate := time.After(timeout)

	// Wait until processing the current batch has started or the timeout.
	//
	// We must wait until the worker has seen the current batch, because it is
	// the only way b.done will be closed. If we do not wait, there is a
	// possible execution flow in which b.done is never closed, and the only way
	// out of Flush would be waiting for the timeout, which is undesired.
	var b batch
	for {
		select {
		case b = <-t.buffer:
			select {
			case <-b.started:
				goto started
			default:
				t.buffer <- b
			}
		case <-toolate:
			goto fail
		}
	}

started:
	// Signal that there won't be any more items in this batch, so that the
	// worker inner loop can end.
	close(b.items)
	// Start a new batch for subsequent events.
	t.buffer <- batch{
		items:   make(chan batchItem, t.BufferSize),
		started: make(chan struct{}),
		done:    make(chan struct{}),
	}

	// Wait until the current batch is done or the timeout.
	select {
	case <-b.done:
		Logger.Println("Buffer flushed successfully.")
		return true
	case <-toolate:
		goto fail
	}

fail:
	Logger.Println("Buffer flushing reached the timeout.")
	return false
}

// Close terminates the event-sending loop. It is useful for preventing
// goroutine leaks when multiple HTTPTransport instances are created.
//
// Close should be called after Flush and before terminating the program,
// otherwise some events may be lost.
func (t *HTTPTransport) Close() {
	close(t.done)
}

func (t *HTTPTransport) worker() {
	for b := range t.buffer {
		// Signal that processing of the current batch has started.
		close(b.started)

		// Return the batch to the buffer so that other goroutines can use it.
		// Equivalent to releasing a lock.
		t.buffer <- b

		// Process all batch items.
	loop:
		for {
			select {
			case <-t.done:
				return
			case item, open := <-b.items:
				if !open {
					break loop
				}
				if t.disabled(item.category) {
					continue
				}

				response, err := t.client.Do(item.request)
				if err != nil {
					Logger.Printf("There was an issue with sending an event: %v", err)
					continue
				}
				if response.StatusCode >= 400 && response.StatusCode <= 599 {
					b, err := io.ReadAll(response.Body)
					if err != nil {
						Logger.Printf("Error while reading response body: %v", err)
					}
					Logger.Printf("Sending event failed with the following error: %s", string(b))
				}

				t.mu.Lock()
				if t.limits == nil {
					t.limits = make(ratelimit.Map)
				}
				t.limits.Merge(ratelimit.FromResponse(response))
				t.mu.Unlock()

				// Drain body up to a limit and close it, allowing the
				// transport to reuse TCP connections.
				_, _ = io.CopyN(io.Discard, response.Body, maxDrainResponseBytes)
				response.Body.Close()
			}
		}

		// Signal that processing of the batch is done.
		close(b.done)
	}
}

func (t *HTTPTransport) disabled(c ratelimit.Category) bool {
	t.mu.RLock()
	defer t.mu.RUnlock()
	disabled := t.limits.IsRateLimited(c)
	if disabled {
		Logger.Printf("Too many requests for %q, backing off till: %v", c, t.limits.Deadline(c))
	}
	return disabled
}

// ================================
// HTTPSyncTransport
// ================================

// HTTPSyncTransport is a blocking implementation of Transport.
//
// Clients using this transport will send requests to Sentry sequentially and
// block until a response is returned.
//
// The blocking behavior is useful in a limited set of use cases. For example,
// use it when deploying code to a Function as a Service ("Serverless")
// platform, where any work happening in a background goroutine is not
// guaranteed to execute.
//
// For most cases, prefer HTTPTransport.
type HTTPSyncTransport struct {
	dsn       *Dsn
	client    *http.Client
	transport http.RoundTripper

	mu     sync.Mutex
	limits ratelimit.Map

	// HTTP Client request timeout. Defaults to 30 seconds.
	Timeout time.Duration
}

// NewHTTPSyncTransport returns a new pre-configured instance of HTTPSyncTransport.
func NewHTTPSyncTransport() *HTTPSyncTransport {
	transport := HTTPSyncTransport{
		Timeout: defaultTimeout,
		limits:  make(ratelimit.Map),
	}

	return &transport
}

// Configure is called by the Client itself, providing its own ClientOptions.
func (t *HTTPSyncTransport) Configure(options ClientOptions) {
	dsn, err := NewDsn(options.Dsn)
	if err != nil {
		Logger.Printf("%v\n", err)
		return
	}
	t.dsn = dsn

	if options.HTTPTransport != nil {
		t.transport = options.HTTPTransport
	} else {
		t.transport = &http.Transport{
			Proxy:           getProxyConfig(options),
			TLSClientConfig: getTLSConfig(options),
		}
	}

	if options.HTTPClient != nil {
		t.client = options.HTTPClient
	} else {
		t.client = &http.Client{
			Transport: t.transport,
			Timeout:   t.Timeout,
		}
	}
}

// SendEvent assembles a new packet out of Event and sends it to the remote server.
func (t *HTTPSyncTransport) SendEvent(event *Event) {
	t.SendEventWithContext(context.Background(), event)
}

// Close is a no-op for HTTPSyncTransport.
func (t *HTTPSyncTransport) Close() {}

// SendEventWithContext assembles a new packet out of Event and sends it to the remote server.
func (t *HTTPSyncTransport) SendEventWithContext(ctx context.Context, event *Event) {
	if t.dsn == nil {
		return
	}

	if t.disabled(categoryFor(event.Type)) {
		return
	}

	request, err := getRequestFromEvent(ctx, event, t.dsn)
	if err != nil {
		return
	}

	var eventType string
	switch {
	case event.Type == transactionType:
		eventType = "transaction"
	default:
		eventType = fmt.Sprintf("%s event", event.Level)
	}
	Logger.Printf(
		"Sending %s [%s] to %s project: %s",
		eventType,
		event.EventID,
		t.dsn.host,
		t.dsn.projectID,
	)

	response, err := t.client.Do(request)
	if err != nil {
		Logger.Printf("There was an issue with sending an event: %v", err)
		return
	}
	if response.StatusCode >= 400 && response.StatusCode <= 599 {
		b, err := io.ReadAll(response.Body)
		if err != nil {
			Logger.Printf("Error while reading response body: %v", err)
		}
		Logger.Printf("Sending %s failed with the following error: %s", eventType, string(b))
	}

	t.mu.Lock()
	if t.limits == nil {
		t.limits = make(ratelimit.Map)
	}

	t.limits.Merge(ratelimit.FromResponse(response))
	t.mu.Unlock()

	// Drain body up to a limit and close it, allowing the
	// transport to reuse TCP connections.
	_, _ = io.CopyN(io.Discard, response.Body, maxDrainResponseBytes)
	response.Body.Close()
}

// Flush is a no-op for HTTPSyncTransport. It always returns true immediately.
func (t *HTTPSyncTransport) Flush(_ time.Duration) bool {
	return true
}

func (t *HTTPSyncTransport) disabled(c ratelimit.Category) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	disabled := t.limits.IsRateLimited(c)
	if disabled {
		Logger.Printf("Too many requests for %q, backing off till: %v", c, t.limits.Deadline(c))
	}
	return disabled
}

// ================================
// noopTransport
// ================================

// noopTransport is an implementation of the Transport interface that drops all events.
// Only used internally when an empty DSN is provided, which effectively disables the SDK.
type noopTransport struct{}

var _ Transport = noopTransport{}

func (noopTransport) Configure(ClientOptions) {
	Logger.Println("Sentry client initialized with an empty DSN. Using noopTransport. No events will be delivered.")
}

func (noopTransport) SendEvent(*Event) {
	Logger.Println("Event dropped due to noopTransport usage.")
}

func (noopTransport) Flush(time.Duration) bool {
	return true
}

func (noopTransport) Close() {}

118
vendor/github.com/getsentry/sentry-go/util.go
generated
vendored
Normal file
@@ -0,0 +1,118 @@
package sentry

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"os"
	"runtime/debug"
	"strings"
	"time"

	exec "golang.org/x/sys/execabs"
)

func uuid() string {
	id := make([]byte, 16)
	// Prefer rand.Read over rand.Reader, see https://go-review.googlesource.com/c/go/+/272326/.
	_, _ = rand.Read(id)
	id[6] &= 0x0F // clear version
	id[6] |= 0x40 // set version to 4 (random uuid)
	id[8] &= 0x3F // clear variant
	id[8] |= 0x80 // set to IETF variant
	return hex.EncodeToString(id)
}

func fileExists(fileName string) bool {
	_, err := os.Stat(fileName)
	return err == nil
}

// monotonicTimeSince replaces uses of time.Now() to take into account the
// monotonic clock reading stored in start, such that duration = end - start is
// unaffected by changes in the system wall clock.
func monotonicTimeSince(start time.Time) (end time.Time) {
	return start.Add(time.Since(start))
}

// nolint: deadcode, unused
func prettyPrint(data interface{}) {
	dbg, _ := json.MarshalIndent(data, "", "  ")
	fmt.Println(string(dbg))
}

// defaultRelease attempts to guess a default release for the currently running
// program.
func defaultRelease() (release string) {
	// Return first non-empty environment variable known to hold release info, if any.
	envs := []string{
		"SENTRY_RELEASE",
		"HEROKU_SLUG_COMMIT",
		"SOURCE_VERSION",
		"CODEBUILD_RESOLVED_SOURCE_VERSION",
		"CIRCLE_SHA1",
		"GAE_DEPLOYMENT_ID",
		"GITHUB_SHA",             // GitHub Actions - https://help.github.com/en/actions
		"COMMIT_REF",             // Netlify - https://docs.netlify.com/
		"VERCEL_GIT_COMMIT_SHA",  // Vercel - https://vercel.com/
		"ZEIT_GITHUB_COMMIT_SHA", // Zeit (now known as Vercel)
		"ZEIT_GITLAB_COMMIT_SHA",
		"ZEIT_BITBUCKET_COMMIT_SHA",
	}
	for _, e := range envs {
		if release = os.Getenv(e); release != "" {
			Logger.Printf("Using release from environment variable %s: %s", e, release)
			return release
		}
	}

	if info, ok := debug.ReadBuildInfo(); ok {
		buildInfoVcsRevision := revisionFromBuildInfo(info)
		if len(buildInfoVcsRevision) > 0 {
			return buildInfoVcsRevision
		}
	}

	// Derive a version string from Git. Example outputs:
	//	v1.0.1-0-g9de4
	//	v2.0-8-g77df-dirty
	//	4f72d7
	if _, err := exec.LookPath("git"); err == nil {
		cmd := exec.Command("git", "describe", "--long", "--always", "--dirty")
		b, err := cmd.Output()
		if err != nil {
			// Either Git is not available or the current directory is not a
			// Git repository.
			var s strings.Builder
			fmt.Fprintf(&s, "Release detection failed: %v", err)
			if err, ok := err.(*exec.ExitError); ok && len(err.Stderr) > 0 {
				fmt.Fprintf(&s, ": %s", err.Stderr)
			}
			Logger.Print(s.String())
		} else {
			release = strings.TrimSpace(string(b))
			Logger.Printf("Using release from Git: %s", release)
			return release
		}
	}

	Logger.Print("Some Sentry features will not be available. See https://docs.sentry.io/product/releases/.")
	Logger.Print("To stop seeing this message, pass a Release to sentry.Init or set the SENTRY_RELEASE environment variable.")
	return ""
}

func revisionFromBuildInfo(info *debug.BuildInfo) string {
	for _, setting := range info.Settings {
		if setting.Key == "vcs.revision" && setting.Value != "" {
			Logger.Printf("Using release from debug info: %s", setting.Value)
			return setting.Value
		}
	}

	return ""
}

// Pointer returns a pointer to the given value.
func Pointer[T any](v T) *T {
	return &v
}

4
vendor/github.com/golang-jwt/jwt/v5/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,4 @@
.DS_Store
bin
.idea/

9
vendor/github.com/golang-jwt/jwt/v5/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,9 @@
Copyright (c) 2012 Dave Grijalva
Copyright (c) 2021 golang-jwt maintainers

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

195
vendor/github.com/golang-jwt/jwt/v5/MIGRATION_GUIDE.md
generated
vendored
Normal file
@@ -0,0 +1,195 @@
# Migration Guide (v5.0.0)

Version `v5` contains a major rework of core functionalities in the `jwt-go`
library. This includes support for several validation options as well as a
re-design of the `Claims` interface. Lastly, we reworked how errors work under
the hood, which should provide a better overall developer experience.

Starting from [v5.0.0](https://github.com/golang-jwt/jwt/releases/tag/v5.0.0),
the import path will be:

    "github.com/golang-jwt/jwt/v5"

For most users, changing the import path *should* suffice. However, since we
intentionally changed and cleaned some of the public API, existing programs
might need to be updated. The following sections describe significant changes
and corresponding updates for existing programs.

## Parsing and Validation Options

Under the hood, a new `Validator` struct takes care of validating the claims. A
long-awaited feature has been the option to fine-tune the validation of tokens.
This is now possible with several `ParserOption` functions that can be appended
to most `Parse` functions, such as `ParseWithClaims`. The most important options
and changes are:

* Added `WithLeeway` to support specifying the leeway that is allowed when
  validating time-based claims, such as `exp` or `nbf`.
* Changed default behavior to not check the `iat` claim. Usage of this claim
  is OPTIONAL according to the JWT RFC. The claim itself is also purely
  informational according to the RFC, so a strict validation failure is not
  recommended. If you want to check for sensible values in these claims,
  please use the `WithIssuedAt` parser option.
* Added `WithAudience`, `WithSubject` and `WithIssuer` to support checking for
  expected `aud`, `sub` and `iss`.
* Added `WithStrictDecoding` and `WithPaddingAllowed` options to allow
  previously global settings to enable base64 strict encoding and the parsing
  of base64 strings with padding. The latter is strictly speaking against the
  standard, but unfortunately some of the major identity providers issue some
  of these incorrect tokens. Both options are disabled by default.
## Changes to the `Claims` interface

### Complete Restructuring

Previously, the claims interface was satisfied with an implementation of a
`Valid() error` function. This had several issues:

* The different claim types (struct claims, map claims, etc.) then contained
  similar (but not 100% identical) code of how this validation was done. This
  led to a lot of (almost) duplicate code and was hard to maintain.
* It was not really semantically close to what a "claim" (or a set of claims)
  really is: a list of defined key/value pairs with a certain semantic
  meaning.

Since all the validation functionality is now extracted into the validator, all
`VerifyXXX` and `Valid` functions have been removed from the `Claims` interface.
Instead, the interface now represents a list of getters to retrieve values with
a specific meaning. This allows us to completely decouple the validation logic
from the underlying storage representation of the claim, which could be a
struct, a map or even something stored in a database.

```go
type Claims interface {
	GetExpirationTime() (*NumericDate, error)
	GetIssuedAt() (*NumericDate, error)
	GetNotBefore() (*NumericDate, error)
	GetIssuer() (string, error)
	GetSubject() (string, error)
	GetAudience() (ClaimStrings, error)
}
```

Users that previously directly called the `Valid` function on their claims,
e.g., to perform validation independently of parsing/verifying a token, can now
use the `jwt.NewValidator` function to create a `Validator` independently of the
`Parser`.

```go
var v = jwt.NewValidator(jwt.WithLeeway(5 * time.Second))
v.Validate(myClaims)
```

### Supported Claim Types and Removal of `StandardClaims`

The two standard claim types supported by this library, `MapClaims` and
`RegisteredClaims`, both implement the necessary functions of this interface.
The old `StandardClaims` struct, which was already deprecated in `v4`, is now
removed.

Users using custom claims, in most cases, will not experience any changes in
behavior as long as they embedded `RegisteredClaims`. If they created a new
claim type from scratch, they now need to implement the proper getter
functions.

### Migrating Application Specific Logic of the old `Valid`

Previously, users could override the `Valid` method in a custom claim, for
example to extend the validation with application-specific claims. However, this
was always very dangerous, since one could easily disable the standard
validation and signature checking.

In order to avoid that, while still supporting the use-case, a new
`ClaimsValidator` interface has been introduced. This interface consists of the
`Validate() error` function. If the validator sees that a `Claims` struct
implements this interface, the errors returned from `Validate` will be
*appended* to the regular standard validation. It is no longer possible to
disable the standard validation (even by accident).

Usage examples can be found in [example_test.go](./example_test.go), to build
claims structs like the following.

```go
// MyCustomClaims includes all registered claims, plus Foo.
type MyCustomClaims struct {
	Foo string `json:"foo"`
	jwt.RegisteredClaims
}

// Validate can be used to execute additional application-specific claims
// validation.
func (m MyCustomClaims) Validate() error {
	if m.Foo != "bar" {
		return errors.New("must be foobar")
	}

	return nil
}
```

## Changes to the `Token` and `Parser` struct

The previously global functions `DecodeSegment` and `EncodeSegment` were moved
to the `Parser` and `Token` struct respectively. This will allow us in the
future to configure the behavior of these two based on options supplied on the
parser or the token (creation). This also removes two previously global
variables and moves them to the parser options `WithStrictDecoding` and
`WithPaddingAllowed`.

In order to do that, we had to adjust the way signing methods work. Previously
they were given a base64 encoded signature in `Verify` and were expected to
return a base64 encoded version of the signature in `Sign`, both as a `string`.
However, this made it necessary to have `DecodeSegment` and `EncodeSegment`
global and was a less than perfect design because we were repeating
encoding/decoding steps for all signing methods. Now, `Sign` and `Verify`
operate on a decoded signature as a `[]byte`, which feels more natural for a
cryptographic operation anyway. Lastly, `Parse` and `SignedString` take care of
the final encoding/decoding part.

In addition to that, we also changed the `Signature` field on `Token` from a
`string` to `[]byte`, and this is also now populated with the decoded form. This
is also more consistent, because the other parts of the JWT, mainly `Header` and
`Claims`, were already stored in decoded form in `Token`. Only the signature was
stored in base64 encoded form, which was redundant with the information in the
`Raw` field, which contains the complete token as base64.

```go
type Token struct {
	Raw       string                 // Raw contains the raw token
	Method    SigningMethod          // Method is the signing method used or to be used
	Header    map[string]interface{} // Header is the first segment of the token in decoded form
	Claims    Claims                 // Claims is the second segment of the token in decoded form
	Signature []byte                 // Signature is the third segment of the token in decoded form
	Valid     bool                   // Valid specifies if the token is valid
}
```

Most (if not all) of these changes should not impact the normal usage of this
library. Only users directly accessing the `Signature` field as well as
developers of custom signing methods should be affected.

# Migration Guide (v4.0.0)

Starting from [v4.0.0](https://github.com/golang-jwt/jwt/releases/tag/v4.0.0),
the import path will be:

    "github.com/golang-jwt/jwt/v4"

The `/v4` version will be backwards compatible with existing `v3.x.y` tags in
this repo, as well as `github.com/dgrijalva/jwt-go`. For most users this should
be a drop-in replacement; if you're having trouble migrating, please open an
issue.

You can replace all occurrences of `github.com/dgrijalva/jwt-go` or
`github.com/golang-jwt/jwt` with `github.com/golang-jwt/jwt/v4`, either manually
or by using tools such as `sed` or `gofmt`.
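For example, a one-liner along these lines rewrites the import path in place (a sketch assuming GNU `sed`'s `-i` flag; run it from the repository root):

```shell
# Find every file still referencing the old import path and rewrite it
# to the new module path. '|' is used as the sed delimiter because the
# paths themselves contain '/'.
grep -rl 'github.com/dgrijalva/jwt-go' . \
  | xargs sed -i 's|github.com/dgrijalva/jwt-go|github.com/golang-jwt/jwt/v4|g'
```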

And then you'd typically run:

```
go get github.com/golang-jwt/jwt/v4
go mod tidy
```

# Older releases (before v3.2.0)

The original migration guide for older releases can be found at
https://github.com/dgrijalva/jwt-go/blob/master/MIGRATION_GUIDE.md.

167
vendor/github.com/golang-jwt/jwt/v5/README.md
generated
vendored
Normal file
@@ -0,0 +1,167 @@
# jwt-go

[](https://github.com/golang-jwt/jwt/actions/workflows/build.yml)
[](https://pkg.go.dev/github.com/golang-jwt/jwt/v5)
[](https://coveralls.io/github/golang-jwt/jwt?branch=main)

A [go](http://www.golang.org) (or 'golang' for search engine friendliness)
implementation of [JSON Web
Tokens](https://datatracker.ietf.org/doc/html/rfc7519).

Starting with [v4.0.0](https://github.com/golang-jwt/jwt/releases/tag/v4.0.0)
this project adds Go module support, but maintains backwards compatibility with
older `v3.x.y` tags and upstream `github.com/dgrijalva/jwt-go`. See the
[`MIGRATION_GUIDE.md`](./MIGRATION_GUIDE.md) for more information. Version
v5.0.0 introduces major improvements to the validation of tokens, but is not
entirely backwards compatible.

> After the original author of the library suggested migrating the maintenance
> of `jwt-go`, a dedicated team of open source maintainers decided to clone the
> existing library into this repository. See
> [dgrijalva/jwt-go#462](https://github.com/dgrijalva/jwt-go/issues/462) for a
> detailed discussion on this topic.

**SECURITY NOTICE:** Some older versions of Go have a security issue in
crypto/elliptic. The recommendation is to upgrade to at least 1.15. See issue
[dgrijalva/jwt-go#216](https://github.com/dgrijalva/jwt-go/issues/216) for more
detail.

**SECURITY NOTICE:** It's important that you [validate the `alg` presented is
what you
expect](https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/).
This library attempts to make it easy to do the right thing by requiring key
types to match the expected alg, but you should take the extra step to verify it
in your usage. See the examples provided.

### Supported Go versions

Our support of Go versions is aligned with Go's [version release
policy](https://golang.org/doc/devel/release#policy). So we will support a major
version of Go until there are two newer major releases. We no longer support
building jwt-go with unsupported Go versions, as these contain security
vulnerabilities which will not be fixed.

## What the heck is a JWT?

JWT.io has [a great introduction](https://jwt.io/introduction) to JSON Web
Tokens.

In short, it's a signed JSON object that does something useful (for example,
authentication). It's commonly used for `Bearer` tokens in OAuth 2. A token is
made of three parts, separated by `.`'s. The first two parts are JSON objects
that have been [base64url](https://datatracker.ietf.org/doc/html/rfc4648)
encoded. The last part is the signature, encoded the same way.
|
||||
The first part is called the header. It contains the necessary information for
|
||||
verifying the last part, the signature. For example, which encryption method
|
||||
was used for signing and what key was used.
|
||||
|
||||
The part in the middle is the interesting bit. It's called the Claims and
|
||||
contains the actual stuff you care about. Refer to [RFC
|
||||
7519](https://datatracker.ietf.org/doc/html/rfc7519) for information about
|
||||
reserved keys and the proper way to add your own.
|
||||
|
||||
## What's in the box?
|
||||
|
||||
This library supports the parsing and verification as well as the generation and
|
||||
signing of JWTs. Current supported signing algorithms are HMAC SHA, RSA,
|
||||
RSA-PSS, and ECDSA, though hooks are present for adding your own.
|
||||
|
||||
## Installation Guidelines
|
||||
|
||||
1. To install the jwt package, you first need to have
|
||||
[Go](https://go.dev/doc/install) installed, then you can use the command
|
||||
below to add `jwt-go` as a dependency in your Go program.
|
||||
|
||||
```sh
|
||||
go get -u github.com/golang-jwt/jwt/v5
|
||||
```
|
||||
|
||||
2. Import it in your code:
|
||||
|
||||
```go
|
||||
import "github.com/golang-jwt/jwt/v5"
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
A detailed usage guide, including how to sign and verify tokens can be found on
|
||||
our [documentation website](https://golang-jwt.github.io/jwt/usage/create/).
|
||||
|
||||
## Examples
|
||||
|
||||
See [the project documentation](https://pkg.go.dev/github.com/golang-jwt/jwt/v5)
|
||||
for examples of usage:
|
||||
|
||||
* [Simple example of parsing and validating a
|
||||
token](https://pkg.go.dev/github.com/golang-jwt/jwt/v5#example-Parse-Hmac)
|
||||
* [Simple example of building and signing a
|
||||
token](https://pkg.go.dev/github.com/golang-jwt/jwt/v5#example-New-Hmac)
|
||||
* [Directory of
|
||||
Examples](https://pkg.go.dev/github.com/golang-jwt/jwt/v5#pkg-examples)
|
||||
|
||||
## Compliance
|
||||
|
||||
This library was last reviewed to comply with [RFC
|
||||
7519](https://datatracker.ietf.org/doc/html/rfc7519) dated May 2015 with a few
|
||||
notable differences:
|
||||
|
||||
* In order to protect against accidental use of [Unsecured
|
||||
JWTs](https://datatracker.ietf.org/doc/html/rfc7519#section-6), tokens using
|
||||
`alg=none` will only be accepted if the constant
|
||||
`jwt.UnsafeAllowNoneSignatureType` is provided as the key.
|
||||
|
||||
## Project Status & Versioning
|
||||
|
||||
This library is considered production ready. Feedback and feature requests are
|
||||
appreciated. The API should be considered stable. There should be very few
|
||||
backwards-incompatible changes outside of major version updates (and only with
|
||||
good reason).
|
||||
|
||||
This project uses [Semantic Versioning 2.0.0](http://semver.org). Accepted pull
|
||||
requests will land on `main`. Periodically, versions will be tagged from
|
||||
`main`. You can find all the releases on [the project releases
|
||||
page](https://github.com/golang-jwt/jwt/releases).
|
||||
|
||||
**BREAKING CHANGES:*** A full list of breaking changes is available in
|
||||
`VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating
|
||||
your code.
|
||||
|
||||
## Extensions
|
||||
|
||||
This library publishes all the necessary components for adding your own signing
|
||||
methods or key functions. Simply implement the `SigningMethod` interface and
|
||||
register a factory method using `RegisterSigningMethod` or provide a
|
||||
`jwt.Keyfunc`.
|
||||
|
||||
A common use case would be integrating with different 3rd party signature
|
||||
providers, like key management services from various cloud providers or Hardware
|
||||
Security Modules (HSMs) or to implement additional standards.
|
||||
|
||||
| Extension | Purpose | Repo |
|
||||
| --------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
|
||||
| GCP | Integrates with multiple Google Cloud Platform signing tools (AppEngine, IAM API, Cloud KMS) | https://github.com/someone1/gcp-jwt-go |
|
||||
| AWS | Integrates with AWS Key Management Service, KMS | https://github.com/matelang/jwt-go-aws-kms |
|
||||
| JWKS | Provides support for JWKS ([RFC 7517](https://datatracker.ietf.org/doc/html/rfc7517)) as a `jwt.Keyfunc` | https://github.com/MicahParks/keyfunc |
|
||||
|
||||
*Disclaimer*: Unless otherwise specified, these integrations are maintained by
|
||||
third parties and should not be considered as a primary offer by any of the
|
||||
mentioned cloud providers
|
||||
|
||||
## More

Go package documentation can be found [on
pkg.go.dev](https://pkg.go.dev/github.com/golang-jwt/jwt/v5). Additional
documentation can be found on [our project
page](https://golang-jwt.github.io/jwt/).

The command line utility included in this project (cmd/jwt) provides a
straightforward example of token creation and parsing as well as a useful tool
for debugging your own integration. You'll also find several implementation
examples in the documentation.
[golang-jwt](https://github.com/orgs/golang-jwt) incorporates a modified version
of the JWT logo, which is distributed under the terms of the [MIT
License](https://github.com/jsonwebtoken/jsonwebtoken.github.io/blob/master/LICENSE.txt).
19
vendor/github.com/golang-jwt/jwt/v5/SECURITY.md
generated
vendored
Normal file
@@ -0,0 +1,19 @@
# Security Policy

## Supported Versions

As of February 2022 (and until this document is updated), the latest version `v4` is supported.

## Reporting a Vulnerability

If you think you found a vulnerability, and even if you are not sure, please report it to jwt-go-security@googlegroups.com or one of the other [golang-jwt maintainers](https://github.com/orgs/golang-jwt/people). Please try to be explicit, and describe the steps to reproduce the security issue with code example(s).

You will receive a response within a timely manner. If the issue is confirmed, we will do our best to release a patch as soon as possible given the complexity of the problem.

## Public Discussions

Please avoid publicly discussing a potential security vulnerability.

Let's take this offline and find a solution first; this limits the potential impact as much as possible.

We appreciate your help!
137
vendor/github.com/golang-jwt/jwt/v5/VERSION_HISTORY.md
generated
vendored
Normal file
@@ -0,0 +1,137 @@
# `jwt-go` Version History

The following version history is kept for historic purposes. To retrieve the current changes of each version, please refer to the change-log of the specific release versions on https://github.com/golang-jwt/jwt/releases.

## 4.0.0

* Introduces support for Go modules. The `v4` version will be backwards compatible with `v3.x.y`.

## 3.2.2

* Starting from this release, we are adopting the policy to support the 2 most recent versions of Go currently available. By the time of this release, this is Go 1.15 and 1.16 ([#28](https://github.com/golang-jwt/jwt/pull/28)).
* Fixed a potential issue that could occur when the verification of `exp`, `iat` or `nbf` was not required and contained invalid contents, i.e. non-numeric/date. Thanks to @thaJeztah for making us aware of that and @giorgos-f3 for originally reporting it to the formtech fork ([#40](https://github.com/golang-jwt/jwt/pull/40)).
* Added support for EdDSA / ED25519 ([#36](https://github.com/golang-jwt/jwt/pull/36)).
* Optimized allocations ([#33](https://github.com/golang-jwt/jwt/pull/33)).

## 3.2.1

* **Import Path Change**: See MIGRATION_GUIDE.md for tips on updating your code
  * Changed the import path from `github.com/dgrijalva/jwt-go` to `github.com/golang-jwt/jwt`
* Fixed a type confusion issue between `string` and `[]string` in `VerifyAudience` ([#12](https://github.com/golang-jwt/jwt/pull/12)). This fixes CVE-2020-26160

## 3.2.0

* Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation
* HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate
* Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. The initial set includes `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before.
* Deprecated `ParseFromRequestWithClaims` to simplify the API in the future.

## 3.1.0

* Improvements to the `jwt` command line tool
* Added `SkipClaimsValidation` option to `Parser`
* Documentation updates

## 3.0.0

* **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code
  * Dropped support for `[]byte` keys when using RSA signing methods. This convenience feature could contribute to security vulnerabilities involving mismatched key types with signing methods.
  * `ParseFromRequest` has been moved to the `request` subpackage and usage has changed
  * The `Claims` property on `Token` is now type `Claims` instead of `map[string]interface{}`. The default value is type `MapClaims`, which is an alias for `map[string]interface{}`. This makes it possible to use a custom type when decoding claims.
* Other Additions and Changes
  * Added the `Claims` interface type to allow users to decode the claims into a custom type
  * Added `ParseWithClaims`, which takes a third argument of type `Claims`. Use this function instead of `Parse` if you have a custom type you'd like to decode into.
  * Dramatically improved the functionality and flexibility of `ParseFromRequest`, which is now in the `request` subpackage
  * Added `ParseFromRequestWithClaims`, which is the `FromRequest` equivalent of `ParseWithClaims`
  * Added a new interface type, `Extractor`, which is used for extracting JWT strings from http requests. Used with `ParseFromRequest` and `ParseFromRequestWithClaims`.
  * Added several new, more specific, validation errors to the error type bitmask
  * Moved examples from the README to executable example files
  * The signing method registry is now thread safe
  * Added a new property to `ValidationError`, which contains the raw error returned by calls made by parse/verify (such as those returned by keyfunc or the json parser)

## 2.7.0

This will likely be the last backwards compatible release before 3.0.0, excluding essential bug fixes.

* Added a new option, `-show`, to the `jwt` command that will just output the decoded token without verifying
* Error text for expired tokens includes how long it's been expired
* Fixed an incorrect error returned from `ParseRSAPublicKeyFromPEM`
* Documentation updates

## 2.6.0

* Exposed the inner error within `ValidationError`
* Fixed validation errors when using the `UseJSONNumber` flag
* Added several unit tests

## 2.5.0

* Added support for the signing method "none". You shouldn't use this. The API tries to make this clear.
* Updated/fixed some documentation
* Added a more helpful error message when trying to parse tokens that begin with `BEARER `

## 2.4.0

* Added a new type, `Parser`, to allow for configuration of various parsing parameters
  * You can now specify a list of valid signing methods. Anything outside this set will be rejected.
  * You can now opt to use the `json.Number` type instead of `float64` when parsing token JSON
* Added support for [Travis CI](https://travis-ci.org/dgrijalva/jwt-go)
* Fixed some bugs with ECDSA parsing

## 2.3.0

* Added support for ECDSA signing methods
* Added support for RSA PSS signing methods (requires go v1.4)

## 2.2.0

* Gracefully handle a `nil` `Keyfunc` being passed to `Parse`. The result will now be the parsed token and an error, instead of a panic.

## 2.1.0

Backwards compatible API change that was missed in 2.0.0.

* The `SignedString` method on `Token` now takes `interface{}` instead of `[]byte`

## 2.0.0

There were two major reasons for breaking backwards compatibility with this update. The first was a refactor required to expand the width of the RSA and HMAC-SHA signing implementations. There will likely be no required code changes to support this change.

The second update, while unfortunately requiring a small change in integration, is required to open up this library to other signing methods. Not all keys used for all signing methods have a single standard on-disk representation. Requiring `[]byte` as the type for all keys proved too limiting. Additionally, this implementation allows for pre-parsed tokens to be reused, which might matter in an application that parses a high volume of tokens with a small set of keys. Backwards compatibility has been maintained for passing `[]byte` to the RSA signing methods, but they will also accept `*rsa.PublicKey` and `*rsa.PrivateKey`.

It is likely the only integration change required here will be to change `func(t *jwt.Token) ([]byte, error)` to `func(t *jwt.Token) (interface{}, error)` when calling `Parse`.

* **Compatibility Breaking Changes**
  * `SigningMethodHS256` is now `*SigningMethodHMAC` instead of `type struct`
  * `SigningMethodRS256` is now `*SigningMethodRSA` instead of `type struct`
  * `KeyFunc` now returns `interface{}` instead of `[]byte`
  * `SigningMethod.Sign` now takes `interface{}` instead of `[]byte` for the key
  * `SigningMethod.Verify` now takes `interface{}` instead of `[]byte` for the key
* Renamed type `SigningMethodHS256` to `SigningMethodHMAC`. Specific sizes are now just instances of this type.
  * Added public package global `SigningMethodHS256`
  * Added public package global `SigningMethodHS384`
  * Added public package global `SigningMethodHS512`
* Renamed type `SigningMethodRS256` to `SigningMethodRSA`. Specific sizes are now just instances of this type.
  * Added public package global `SigningMethodRS256`
  * Added public package global `SigningMethodRS384`
  * Added public package global `SigningMethodRS512`
* Moved the sample private key for HMAC tests from an inline value to a file on disk. The value is unchanged.
* Refactored the RSA implementation to be easier to read
* Exposed helper methods `ParseRSAPrivateKeyFromPEM` and `ParseRSAPublicKeyFromPEM`

## 1.0.2

* Fixed a bug in parsing public keys from certificates
* Added more tests around the parsing of keys for RS256
* Code refactoring in the RS256 implementation. No functional changes

## 1.0.1

* Fixed a panic if the RS256 signing method was passed an invalid key

## 1.0.0

* First versioned release
* API stabilized
* Supports creating, signing, parsing, and validating JWT tokens
* Supports RS256 and HS256 signing methods
16
vendor/github.com/golang-jwt/jwt/v5/claims.go
generated
vendored
Normal file
@@ -0,0 +1,16 @@
package jwt

// Claims represent any form of a JWT Claims Set according to
// https://datatracker.ietf.org/doc/html/rfc7519#section-4. In order to have a
// common basis for validation, it is required that an implementation is able to
// supply at least the claim names provided in
// https://datatracker.ietf.org/doc/html/rfc7519#section-4.1 namely `exp`,
// `iat`, `nbf`, `iss`, `sub` and `aud`.
type Claims interface {
	GetExpirationTime() (*NumericDate, error)
	GetIssuedAt() (*NumericDate, error)
	GetNotBefore() (*NumericDate, error)
	GetIssuer() (string, error)
	GetSubject() (string, error)
	GetAudience() (ClaimStrings, error)
}
4
vendor/github.com/golang-jwt/jwt/v5/doc.go
generated
vendored
Normal file
@@ -0,0 +1,4 @@
// Package jwt is a Go implementation of JSON Web Tokens: http://self-issued.info/docs/draft-jones-json-web-token.html
//
// See README.md for more info.
package jwt
134
vendor/github.com/golang-jwt/jwt/v5/ecdsa.go
generated
vendored
Normal file
@@ -0,0 +1,134 @@
package jwt

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/rand"
	"errors"
	"math/big"
)

var (
	// Sadly this is missing from crypto/ecdsa compared to crypto/rsa
	ErrECDSAVerification = errors.New("crypto/ecdsa: verification error")
)

// SigningMethodECDSA implements the ECDSA family of signing methods.
// Expects *ecdsa.PrivateKey for signing and *ecdsa.PublicKey for verification
type SigningMethodECDSA struct {
	Name      string
	Hash      crypto.Hash
	KeySize   int
	CurveBits int
}

// Specific instances for EC256 and company
var (
	SigningMethodES256 *SigningMethodECDSA
	SigningMethodES384 *SigningMethodECDSA
	SigningMethodES512 *SigningMethodECDSA
)

func init() {
	// ES256
	SigningMethodES256 = &SigningMethodECDSA{"ES256", crypto.SHA256, 32, 256}
	RegisterSigningMethod(SigningMethodES256.Alg(), func() SigningMethod {
		return SigningMethodES256
	})

	// ES384
	SigningMethodES384 = &SigningMethodECDSA{"ES384", crypto.SHA384, 48, 384}
	RegisterSigningMethod(SigningMethodES384.Alg(), func() SigningMethod {
		return SigningMethodES384
	})

	// ES512
	SigningMethodES512 = &SigningMethodECDSA{"ES512", crypto.SHA512, 66, 521}
	RegisterSigningMethod(SigningMethodES512.Alg(), func() SigningMethod {
		return SigningMethodES512
	})
}

func (m *SigningMethodECDSA) Alg() string {
	return m.Name
}

// Verify implements token verification for the SigningMethod.
// For this verify method, key must be an ecdsa.PublicKey struct
func (m *SigningMethodECDSA) Verify(signingString string, sig []byte, key interface{}) error {
	// Get the key
	var ecdsaKey *ecdsa.PublicKey
	switch k := key.(type) {
	case *ecdsa.PublicKey:
		ecdsaKey = k
	default:
		return newError("ECDSA verify expects *ecdsa.PublicKey", ErrInvalidKeyType)
	}

	if len(sig) != 2*m.KeySize {
		return ErrECDSAVerification
	}

	r := big.NewInt(0).SetBytes(sig[:m.KeySize])
	s := big.NewInt(0).SetBytes(sig[m.KeySize:])

	// Create hasher
	if !m.Hash.Available() {
		return ErrHashUnavailable
	}
	hasher := m.Hash.New()
	hasher.Write([]byte(signingString))

	// Verify the signature
	if verifystatus := ecdsa.Verify(ecdsaKey, hasher.Sum(nil), r, s); verifystatus {
		return nil
	}

	return ErrECDSAVerification
}

// Sign implements token signing for the SigningMethod.
// For this signing method, key must be an ecdsa.PrivateKey struct
func (m *SigningMethodECDSA) Sign(signingString string, key interface{}) ([]byte, error) {
	// Get the key
	var ecdsaKey *ecdsa.PrivateKey
	switch k := key.(type) {
	case *ecdsa.PrivateKey:
		ecdsaKey = k
	default:
		return nil, newError("ECDSA sign expects *ecdsa.PrivateKey", ErrInvalidKeyType)
	}

	// Create the hasher
	if !m.Hash.Available() {
		return nil, ErrHashUnavailable
	}

	hasher := m.Hash.New()
	hasher.Write([]byte(signingString))

	// Sign the string and return r, s
	if r, s, err := ecdsa.Sign(rand.Reader, ecdsaKey, hasher.Sum(nil)); err == nil {
		curveBits := ecdsaKey.Curve.Params().BitSize

		if m.CurveBits != curveBits {
			return nil, ErrInvalidKey
		}

		keyBytes := curveBits / 8
		if curveBits%8 > 0 {
			keyBytes += 1
		}

		// We serialize the outputs (r and s) into big-endian byte arrays
		// padded with zeros on the left to make sure the sizes work out.
		// Output must be 2*keyBytes long.
		out := make([]byte, 2*keyBytes)
		r.FillBytes(out[0:keyBytes]) // r is assigned to the first half of output.
		s.FillBytes(out[keyBytes:])  // s is assigned to the second half of output.

		return out, nil
	} else {
		return nil, err
	}
}
69
vendor/github.com/golang-jwt/jwt/v5/ecdsa_utils.go
generated
vendored
Normal file
@@ -0,0 +1,69 @@
package jwt

import (
	"crypto/ecdsa"
	"crypto/x509"
	"encoding/pem"
	"errors"
)

var (
	ErrNotECPublicKey  = errors.New("key is not a valid ECDSA public key")
	ErrNotECPrivateKey = errors.New("key is not a valid ECDSA private key")
)

// ParseECPrivateKeyFromPEM parses a PEM encoded Elliptic Curve Private Key Structure
func ParseECPrivateKeyFromPEM(key []byte) (*ecdsa.PrivateKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	// Parse the key
	var parsedKey interface{}
	if parsedKey, err = x509.ParseECPrivateKey(block.Bytes); err != nil {
		if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil {
			return nil, err
		}
	}

	var pkey *ecdsa.PrivateKey
	var ok bool
	if pkey, ok = parsedKey.(*ecdsa.PrivateKey); !ok {
		return nil, ErrNotECPrivateKey
	}

	return pkey, nil
}

// ParseECPublicKeyFromPEM parses a PEM encoded PKCS1 or PKCS8 public key
func ParseECPublicKeyFromPEM(key []byte) (*ecdsa.PublicKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	// Parse the key
	var parsedKey interface{}
	if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil {
		if cert, err := x509.ParseCertificate(block.Bytes); err == nil {
			parsedKey = cert.PublicKey
		} else {
			return nil, err
		}
	}

	var pkey *ecdsa.PublicKey
	var ok bool
	if pkey, ok = parsedKey.(*ecdsa.PublicKey); !ok {
		return nil, ErrNotECPublicKey
	}

	return pkey, nil
}
79
vendor/github.com/golang-jwt/jwt/v5/ed25519.go
generated
vendored
Normal file
@@ -0,0 +1,79 @@
package jwt

import (
	"crypto"
	"crypto/ed25519"
	"crypto/rand"
	"errors"
)

var (
	ErrEd25519Verification = errors.New("ed25519: verification error")
)

// SigningMethodEd25519 implements the EdDSA family.
// Expects ed25519.PrivateKey for signing and ed25519.PublicKey for verification
type SigningMethodEd25519 struct{}

// Specific instance for EdDSA
var (
	SigningMethodEdDSA *SigningMethodEd25519
)

func init() {
	SigningMethodEdDSA = &SigningMethodEd25519{}
	RegisterSigningMethod(SigningMethodEdDSA.Alg(), func() SigningMethod {
		return SigningMethodEdDSA
	})
}

func (m *SigningMethodEd25519) Alg() string {
	return "EdDSA"
}

// Verify implements token verification for the SigningMethod.
// For this verify method, key must be an ed25519.PublicKey
func (m *SigningMethodEd25519) Verify(signingString string, sig []byte, key interface{}) error {
	var ed25519Key ed25519.PublicKey
	var ok bool

	if ed25519Key, ok = key.(ed25519.PublicKey); !ok {
		return newError("Ed25519 verify expects ed25519.PublicKey", ErrInvalidKeyType)
	}

	if len(ed25519Key) != ed25519.PublicKeySize {
		return ErrInvalidKey
	}

	// Verify the signature
	if !ed25519.Verify(ed25519Key, []byte(signingString), sig) {
		return ErrEd25519Verification
	}

	return nil
}

// Sign implements token signing for the SigningMethod.
// For this signing method, key must be an ed25519.PrivateKey
func (m *SigningMethodEd25519) Sign(signingString string, key interface{}) ([]byte, error) {
	var ed25519Key crypto.Signer
	var ok bool

	if ed25519Key, ok = key.(crypto.Signer); !ok {
		return nil, newError("Ed25519 sign expects crypto.Signer", ErrInvalidKeyType)
	}

	if _, ok := ed25519Key.Public().(ed25519.PublicKey); !ok {
		return nil, ErrInvalidKey
	}

	// Sign the string and return the result. ed25519 performs a two-pass hash
	// as part of its algorithm. Therefore, we need to pass a non-prehashed
	// message into the Sign function, as indicated by crypto.Hash(0)
	sig, err := ed25519Key.Sign(rand.Reader, []byte(signingString), crypto.Hash(0))
	if err != nil {
		return nil, err
	}

	return sig, nil
}
64
vendor/github.com/golang-jwt/jwt/v5/ed25519_utils.go
generated
vendored
Normal file
@@ -0,0 +1,64 @@
package jwt

import (
	"crypto"
	"crypto/ed25519"
	"crypto/x509"
	"encoding/pem"
	"errors"
)

var (
	ErrNotEdPrivateKey = errors.New("key is not a valid Ed25519 private key")
	ErrNotEdPublicKey  = errors.New("key is not a valid Ed25519 public key")
)

// ParseEdPrivateKeyFromPEM parses a PEM-encoded Edwards curve private key
func ParseEdPrivateKeyFromPEM(key []byte) (crypto.PrivateKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	// Parse the key
	var parsedKey interface{}
	if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil {
		return nil, err
	}

	var pkey ed25519.PrivateKey
	var ok bool
	if pkey, ok = parsedKey.(ed25519.PrivateKey); !ok {
		return nil, ErrNotEdPrivateKey
	}

	return pkey, nil
}

// ParseEdPublicKeyFromPEM parses a PEM-encoded Edwards curve public key
func ParseEdPublicKeyFromPEM(key []byte) (crypto.PublicKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	// Parse the key
	var parsedKey interface{}
	if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil {
		return nil, err
	}

	var pkey ed25519.PublicKey
	var ok bool
	if pkey, ok = parsedKey.(ed25519.PublicKey); !ok {
		return nil, ErrNotEdPublicKey
	}

	return pkey, nil
}
49
vendor/github.com/golang-jwt/jwt/v5/errors.go
generated
vendored
Normal file
@@ -0,0 +1,49 @@
package jwt

import (
	"errors"
	"strings"
)

var (
	ErrInvalidKey                = errors.New("key is invalid")
	ErrInvalidKeyType            = errors.New("key is of invalid type")
	ErrHashUnavailable           = errors.New("the requested hash function is unavailable")
	ErrTokenMalformed            = errors.New("token is malformed")
	ErrTokenUnverifiable         = errors.New("token is unverifiable")
	ErrTokenSignatureInvalid     = errors.New("token signature is invalid")
	ErrTokenRequiredClaimMissing = errors.New("token is missing required claim")
	ErrTokenInvalidAudience      = errors.New("token has invalid audience")
	ErrTokenExpired              = errors.New("token is expired")
	ErrTokenUsedBeforeIssued     = errors.New("token used before issued")
	ErrTokenInvalidIssuer        = errors.New("token has invalid issuer")
	ErrTokenInvalidSubject       = errors.New("token has invalid subject")
	ErrTokenNotValidYet          = errors.New("token is not valid yet")
	ErrTokenInvalidId            = errors.New("token has invalid id")
	ErrTokenInvalidClaims        = errors.New("token has invalid claims")
	ErrInvalidType               = errors.New("invalid type for claim")
)

// joinedError is an error type that works similar to what [errors.Join]
// produces, with the exception that it has a nice error string; mainly its
// error messages are concatenated using a comma, rather than a newline.
type joinedError struct {
	errs []error
}

func (je joinedError) Error() string {
	msg := []string{}
	for _, err := range je.errs {
		msg = append(msg, err.Error())
	}

	return strings.Join(msg, ", ")
}

// joinErrors joins together multiple errors. Useful for scenarios where
// multiple errors next to each other occur, e.g., in claims validation.
func joinErrors(errs ...error) error {
	return &joinedError{
		errs: errs,
	}
}
47
vendor/github.com/golang-jwt/jwt/v5/errors_go1_20.go
generated
vendored
Normal file
@@ -0,0 +1,47 @@
//go:build go1.20
// +build go1.20

package jwt

import (
	"fmt"
)

// Unwrap implements the multiple error unwrapping for this error type, which is
// possible in Go 1.20.
func (je joinedError) Unwrap() []error {
	return je.errs
}

// newError creates a new error message with a detailed error message. The
// message will be prefixed with the contents of the supplied error type.
// Additionally, more errors that provide more context can be supplied, which
// will be appended to the message. This makes use of Go 1.20's possibility to
// include more than one %w formatting directive in [fmt.Errorf].
//
// For example,
//
//	newError("no keyfunc was provided", ErrTokenUnverifiable)
//
// will produce the error string
//
//	"token is unverifiable: no keyfunc was provided"
func newError(message string, err error, more ...error) error {
	var format string
	var args []any
	if message != "" {
		format = "%w: %s"
		args = []any{err, message}
	} else {
		format = "%w"
		args = []any{err}
	}

	for _, e := range more {
		format += ": %w"
		args = append(args, e)
	}

	err = fmt.Errorf(format, args...)
	return err
}
78
vendor/github.com/golang-jwt/jwt/v5/errors_go_other.go
generated
vendored
Normal file
@@ -0,0 +1,78 @@
//go:build !go1.20
// +build !go1.20

package jwt

import (
	"errors"
	"fmt"
)

// Is implements checking for multiple errors using [errors.Is], since multiple
// error unwrapping is not possible in versions less than Go 1.20.
func (je joinedError) Is(err error) bool {
	for _, e := range je.errs {
		if errors.Is(e, err) {
			return true
		}
	}

	return false
}

// wrappedErrors is a workaround for wrapping multiple errors in environments
// where Go 1.20 is not available. It basically uses the already implemented
// functionality of joinedError to handle multiple errors and supplies a
// custom error message that is identical to the one we produce in Go 1.20 using
// multiple %w directives.
type wrappedErrors struct {
	msg string
	joinedError
}

// Error returns the stored error string
func (we wrappedErrors) Error() string {
	return we.msg
}

// newError creates a new error message with a detailed error message. The
// message will be prefixed with the contents of the supplied error type.
// Additionally, more errors that provide more context can be supplied, which
// will be appended to the message. Since we cannot make use of Go 1.20's
// possibility to include more than one %w formatting directive in
// [fmt.Errorf], we have to emulate that.
//
// For example,
//
//	newError("no keyfunc was provided", ErrTokenUnverifiable)
//
// will produce the error string
//
//	"token is unverifiable: no keyfunc was provided"
func newError(message string, err error, more ...error) error {
	// We cannot wrap multiple errors here with %w, so we have to be a little
	// bit creative. Basically, we are using %s instead of %w to produce the
	// same error message and then throw the result into a custom error struct.
	var format string
	var args []any
	if message != "" {
		format = "%s: %s"
		args = []any{err, message}
	} else {
		format = "%s"
		args = []any{err}
	}
	errs := []error{err}

	for _, e := range more {
		format += ": %s"
		args = append(args, e)
		errs = append(errs, e)
	}

	err = &wrappedErrors{
		msg:         fmt.Sprintf(format, args...),
		joinedError: joinedError{errs: errs},
	}
	return err
}
104
vendor/github.com/golang-jwt/jwt/v5/hmac.go
generated
vendored
Normal file
@@ -0,0 +1,104 @@
package jwt

import (
	"crypto"
	"crypto/hmac"
	"errors"
)

// SigningMethodHMAC implements the HMAC-SHA family of signing methods.
// Expects key type of []byte for both signing and validation
type SigningMethodHMAC struct {
	Name string
	Hash crypto.Hash
}

// Specific instances for HS256 and company
var (
	SigningMethodHS256  *SigningMethodHMAC
	SigningMethodHS384  *SigningMethodHMAC
	SigningMethodHS512  *SigningMethodHMAC
	ErrSignatureInvalid = errors.New("signature is invalid")
)

func init() {
	// HS256
	SigningMethodHS256 = &SigningMethodHMAC{"HS256", crypto.SHA256}
	RegisterSigningMethod(SigningMethodHS256.Alg(), func() SigningMethod {
		return SigningMethodHS256
	})

	// HS384
	SigningMethodHS384 = &SigningMethodHMAC{"HS384", crypto.SHA384}
	RegisterSigningMethod(SigningMethodHS384.Alg(), func() SigningMethod {
		return SigningMethodHS384
	})

	// HS512
	SigningMethodHS512 = &SigningMethodHMAC{"HS512", crypto.SHA512}
	RegisterSigningMethod(SigningMethodHS512.Alg(), func() SigningMethod {
		return SigningMethodHS512
	})
}

func (m *SigningMethodHMAC) Alg() string {
	return m.Name
}

// Verify implements token verification for the SigningMethod. Returns nil if
// the signature is valid. Key must be []byte.
//
// Note it is not advised to provide a []byte which was converted from a 'human
// readable' string using a subset of ASCII characters. To maximize entropy, you
// should ideally be providing a []byte key which was produced from a
// cryptographically random source, e.g. crypto/rand. Additional information
// about this, and why we intentionally are not supporting string as a key can
// be found on our usage guide
// https://golang-jwt.github.io/jwt/usage/signing_methods/#signing-methods-and-key-types.
func (m *SigningMethodHMAC) Verify(signingString string, sig []byte, key interface{}) error {
	// Verify the key is the right type
	keyBytes, ok := key.([]byte)
	if !ok {
		return newError("HMAC verify expects []byte", ErrInvalidKeyType)
	}

	// Can we use the specified hashing method?
	if !m.Hash.Available() {
		return ErrHashUnavailable
	}

	// This signing method is symmetric, so we validate the signature
	// by reproducing the signature from the signing string and key, then
	// comparing that against the provided signature.
	hasher := hmac.New(m.Hash.New, keyBytes)
	hasher.Write([]byte(signingString))
	if !hmac.Equal(sig, hasher.Sum(nil)) {
		return ErrSignatureInvalid
	}

	// No validation errors. Signature is good.
	return nil
}

// Sign implements token signing for the SigningMethod. Key must be []byte.
//
// Note it is not advised to provide a []byte which was converted from a 'human
// readable' string using a subset of ASCII characters. To maximize entropy, you
// should ideally be providing a []byte key which was produced from a
// cryptographically random source, e.g. crypto/rand. Additional information
// about this, and why we intentionally are not supporting string as a key can
// be found on our usage guide https://golang-jwt.github.io/jwt/usage/signing_methods/.
func (m *SigningMethodHMAC) Sign(signingString string, key interface{}) ([]byte, error) {
	if keyBytes, ok := key.([]byte); ok {
		if !m.Hash.Available() {
			return nil, ErrHashUnavailable
		}

		hasher := hmac.New(m.Hash.New, keyBytes)
		hasher.Write([]byte(signingString))

		return hasher.Sum(nil), nil
	}

	return nil, newError("HMAC sign expects []byte", ErrInvalidKeyType)
}
109 vendor/github.com/golang-jwt/jwt/v5/map_claims.go generated vendored Normal file
@@ -0,0 +1,109 @@
package jwt

import (
	"encoding/json"
	"fmt"
)

// MapClaims is a claims type that uses the map[string]interface{} for JSON
// decoding. This is the default claims type if you don't supply one
type MapClaims map[string]interface{}

// GetExpirationTime implements the Claims interface.
func (m MapClaims) GetExpirationTime() (*NumericDate, error) {
	return m.parseNumericDate("exp")
}

// GetNotBefore implements the Claims interface.
func (m MapClaims) GetNotBefore() (*NumericDate, error) {
	return m.parseNumericDate("nbf")
}

// GetIssuedAt implements the Claims interface.
func (m MapClaims) GetIssuedAt() (*NumericDate, error) {
	return m.parseNumericDate("iat")
}

// GetAudience implements the Claims interface.
func (m MapClaims) GetAudience() (ClaimStrings, error) {
	return m.parseClaimsString("aud")
}

// GetIssuer implements the Claims interface.
func (m MapClaims) GetIssuer() (string, error) {
	return m.parseString("iss")
}

// GetSubject implements the Claims interface.
func (m MapClaims) GetSubject() (string, error) {
	return m.parseString("sub")
}

// parseNumericDate tries to parse a key in the map claims type as a numeric
// date. This will succeed if the underlying type is either a [float64] or a
// [json.Number]. Otherwise, nil will be returned.
func (m MapClaims) parseNumericDate(key string) (*NumericDate, error) {
	v, ok := m[key]
	if !ok {
		return nil, nil
	}

	switch exp := v.(type) {
	case float64:
		if exp == 0 {
			return nil, nil
		}

		return newNumericDateFromSeconds(exp), nil
	case json.Number:
		v, _ := exp.Float64()

		return newNumericDateFromSeconds(v), nil
	}

	return nil, newError(fmt.Sprintf("%s is invalid", key), ErrInvalidType)
}

// parseClaimsString tries to parse a key in the map claims type as a
// [ClaimStrings] type, which can either be a string or an array of string.
func (m MapClaims) parseClaimsString(key string) (ClaimStrings, error) {
	var cs []string
	switch v := m[key].(type) {
	case string:
		cs = append(cs, v)
	case []string:
		cs = v
	case []interface{}:
		for _, a := range v {
			vs, ok := a.(string)
			if !ok {
				return nil, newError(fmt.Sprintf("%s is invalid", key), ErrInvalidType)
			}
			cs = append(cs, vs)
		}
	}

	return cs, nil
}

// parseString tries to parse a key in the map claims type as a [string] type.
// If the key does not exist, an empty string is returned. If the key has the
// wrong type, an error is returned.
func (m MapClaims) parseString(key string) (string, error) {
	var (
		ok  bool
		raw interface{}
		iss string
	)
	raw, ok = m[key]
	if !ok {
		return "", nil
	}

	iss, ok = raw.(string)
	if !ok {
		return "", newError(fmt.Sprintf("%s is invalid", key), ErrInvalidType)
	}

	return iss, nil
}
50 vendor/github.com/golang-jwt/jwt/v5/none.go generated vendored Normal file
@@ -0,0 +1,50 @@
package jwt

// SigningMethodNone implements the none signing method. This is required by the spec
// but you probably should never use it.
var SigningMethodNone *signingMethodNone

const UnsafeAllowNoneSignatureType unsafeNoneMagicConstant = "none signing method allowed"

var NoneSignatureTypeDisallowedError error

type signingMethodNone struct{}
type unsafeNoneMagicConstant string

func init() {
	SigningMethodNone = &signingMethodNone{}
	NoneSignatureTypeDisallowedError = newError("'none' signature type is not allowed", ErrTokenUnverifiable)

	RegisterSigningMethod(SigningMethodNone.Alg(), func() SigningMethod {
		return SigningMethodNone
	})
}

func (m *signingMethodNone) Alg() string {
	return "none"
}

// Only allow 'none' alg type if UnsafeAllowNoneSignatureType is specified as the key
func (m *signingMethodNone) Verify(signingString string, sig []byte, key interface{}) (err error) {
	// Key must be UnsafeAllowNoneSignatureType to prevent accidentally
	// accepting 'none' signing method
	if _, ok := key.(unsafeNoneMagicConstant); !ok {
		return NoneSignatureTypeDisallowedError
	}
	// If signing method is none, signature must be an empty string
	if len(sig) != 0 {
		return newError("'none' signing method with non-empty signature", ErrTokenUnverifiable)
	}

	// Accept 'none' signing method.
	return nil
}

// Only allow 'none' signing if UnsafeAllowNoneSignatureType is specified as the key
func (m *signingMethodNone) Sign(signingString string, key interface{}) ([]byte, error) {
	if _, ok := key.(unsafeNoneMagicConstant); ok {
		return []byte{}, nil
	}

	return nil, NoneSignatureTypeDisallowedError
}
238 vendor/github.com/golang-jwt/jwt/v5/parser.go generated vendored Normal file
@@ -0,0 +1,238 @@
package jwt

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

type Parser struct {
	// If populated, only these methods will be considered valid.
	validMethods []string

	// Use JSON Number format in JSON decoder.
	useJSONNumber bool

	// Skip claims validation during token parsing.
	skipClaimsValidation bool

	validator *Validator

	decodeStrict bool

	decodePaddingAllowed bool
}

// NewParser creates a new Parser with the specified options
func NewParser(options ...ParserOption) *Parser {
	p := &Parser{
		validator: &Validator{},
	}

	// Loop through our parsing options and apply them
	for _, option := range options {
		option(p)
	}

	return p
}

// Parse parses, validates, verifies the signature and returns the parsed token.
// keyFunc will receive the parsed token and should return the key for validating.
func (p *Parser) Parse(tokenString string, keyFunc Keyfunc) (*Token, error) {
	return p.ParseWithClaims(tokenString, MapClaims{}, keyFunc)
}

// ParseWithClaims parses, validates, and verifies like Parse, but supplies a default object implementing the Claims
// interface. This provides default values which can be overridden and allows a caller to use their own type, rather
// than the default MapClaims implementation of Claims.
//
// Note: If you provide a custom claim implementation that embeds one of the standard claims (such as RegisteredClaims),
// make sure that a) you either embed a non-pointer version of the claims or b) if you are using a pointer, allocate the
// proper memory for it before passing in the overall claims, otherwise you might run into a panic.
func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) {
	token, parts, err := p.ParseUnverified(tokenString, claims)
	if err != nil {
		return token, err
	}

	// Verify signing method is in the required set
	if p.validMethods != nil {
		var signingMethodValid = false
		var alg = token.Method.Alg()
		for _, m := range p.validMethods {
			if m == alg {
				signingMethodValid = true
				break
			}
		}
		if !signingMethodValid {
			// signing method is not in the listed set
			return token, newError(fmt.Sprintf("signing method %v is invalid", alg), ErrTokenSignatureInvalid)
		}
	}

	// Decode signature
	token.Signature, err = p.DecodeSegment(parts[2])
	if err != nil {
		return token, newError("could not base64 decode signature", ErrTokenMalformed, err)
	}
	text := strings.Join(parts[0:2], ".")

	// Lookup key(s)
	if keyFunc == nil {
		// keyFunc was not provided. short circuiting validation
		return token, newError("no keyfunc was provided", ErrTokenUnverifiable)
	}

	got, err := keyFunc(token)
	if err != nil {
		return token, newError("error while executing keyfunc", ErrTokenUnverifiable, err)
	}

	switch have := got.(type) {
	case VerificationKeySet:
		if len(have.Keys) == 0 {
			return token, newError("keyfunc returned empty verification key set", ErrTokenUnverifiable)
		}
		// Iterate through keys and verify signature, skipping the rest when a match is found.
		// Return the last error if no match is found.
		for _, key := range have.Keys {
			if err = token.Method.Verify(text, token.Signature, key); err == nil {
				break
			}
		}
	default:
		err = token.Method.Verify(text, token.Signature, have)
	}
	if err != nil {
		return token, newError("", ErrTokenSignatureInvalid, err)
	}

	// Validate Claims
	if !p.skipClaimsValidation {
		// Make sure we have at least a default validator
		if p.validator == nil {
			p.validator = NewValidator()
		}

		if err := p.validator.Validate(claims); err != nil {
			return token, newError("", ErrTokenInvalidClaims, err)
		}
	}

	// No errors so far, token is valid.
	token.Valid = true

	return token, nil
}

// ParseUnverified parses the token but doesn't validate the signature.
//
// WARNING: Don't use this method unless you know what you're doing.
//
// It's only ever useful in cases where you know the signature is valid (since it has already
// been or will be checked elsewhere in the stack) and you want to extract values from it.
func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) {
	parts = strings.Split(tokenString, ".")
	if len(parts) != 3 {
		return nil, parts, newError("token contains an invalid number of segments", ErrTokenMalformed)
	}

	token = &Token{Raw: tokenString}

	// parse Header
	var headerBytes []byte
	if headerBytes, err = p.DecodeSegment(parts[0]); err != nil {
		return token, parts, newError("could not base64 decode header", ErrTokenMalformed, err)
	}
	if err = json.Unmarshal(headerBytes, &token.Header); err != nil {
		return token, parts, newError("could not JSON decode header", ErrTokenMalformed, err)
	}

	// parse Claims
	token.Claims = claims

	claimBytes, err := p.DecodeSegment(parts[1])
	if err != nil {
		return token, parts, newError("could not base64 decode claim", ErrTokenMalformed, err)
	}

	// If `useJSONNumber` is enabled then we must use *json.Decoder to decode
	// the claims. However, this comes with a performance penalty so only use
	// it if we must and, otherwise, simply use json.Unmarshal.
	if !p.useJSONNumber {
		// JSON Unmarshal. Special case for map type to avoid weird pointer behavior.
		if c, ok := token.Claims.(MapClaims); ok {
			err = json.Unmarshal(claimBytes, &c)
		} else {
			err = json.Unmarshal(claimBytes, &claims)
		}
	} else {
		dec := json.NewDecoder(bytes.NewBuffer(claimBytes))
		dec.UseNumber()
		// JSON Decode. Special case for map type to avoid weird pointer behavior.
		if c, ok := token.Claims.(MapClaims); ok {
			err = dec.Decode(&c)
		} else {
			err = dec.Decode(&claims)
		}
	}
	if err != nil {
		return token, parts, newError("could not JSON decode claim", ErrTokenMalformed, err)
	}

	// Lookup signature method
	if method, ok := token.Header["alg"].(string); ok {
		if token.Method = GetSigningMethod(method); token.Method == nil {
			return token, parts, newError("signing method (alg) is unavailable", ErrTokenUnverifiable)
		}
	} else {
		return token, parts, newError("signing method (alg) is unspecified", ErrTokenUnverifiable)
	}

	return token, parts, nil
}

// DecodeSegment decodes a JWT specific base64url encoding. This function will
// take into account whether the [Parser] is configured with additional options,
// such as [WithStrictDecoding] or [WithPaddingAllowed].
func (p *Parser) DecodeSegment(seg string) ([]byte, error) {
	encoding := base64.RawURLEncoding

	if p.decodePaddingAllowed {
		if l := len(seg) % 4; l > 0 {
			seg += strings.Repeat("=", 4-l)
		}
		encoding = base64.URLEncoding
	}

	if p.decodeStrict {
		encoding = encoding.Strict()
	}
	return encoding.DecodeString(seg)
}

// Parse parses, validates, verifies the signature and returns the parsed token.
// keyFunc will receive the parsed token and should return the cryptographic key
// for verifying the signature. The caller is strongly encouraged to set the
// WithValidMethods option to validate the 'alg' claim in the token matches the
// expected algorithm. For more details about the importance of validating the
// 'alg' claim, see
// https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/
func Parse(tokenString string, keyFunc Keyfunc, options ...ParserOption) (*Token, error) {
	return NewParser(options...).Parse(tokenString, keyFunc)
}

// ParseWithClaims is a shortcut for NewParser().ParseWithClaims().
//
// Note: If you provide a custom claim implementation that embeds one of the
// standard claims (such as RegisteredClaims), make sure that a) you either
// embed a non-pointer version of the claims or b) if you are using a pointer,
// allocate the proper memory for it before passing in the overall claims,
// otherwise you might run into a panic.
func ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc, options ...ParserOption) (*Token, error) {
	return NewParser(options...).ParseWithClaims(tokenString, claims, keyFunc)
}
128 vendor/github.com/golang-jwt/jwt/v5/parser_option.go generated vendored Normal file
@@ -0,0 +1,128 @@
package jwt

import "time"

// ParserOption is used to implement functional-style options that modify the
// behavior of the parser. To add new options, just create a function (ideally
// beginning with With or Without) that returns an anonymous function that takes
// a *Parser type as input and manipulates its configuration accordingly.
type ParserOption func(*Parser)

// WithValidMethods is an option to supply algorithm methods that the parser
// will check. Only those methods will be considered valid. It is heavily
// encouraged to use this option in order to prevent attacks such as
// https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/.
func WithValidMethods(methods []string) ParserOption {
	return func(p *Parser) {
		p.validMethods = methods
	}
}

// WithJSONNumber is an option to configure the underlying JSON parser with
// UseNumber.
func WithJSONNumber() ParserOption {
	return func(p *Parser) {
		p.useJSONNumber = true
	}
}

// WithoutClaimsValidation is an option to disable claims validation. This
// option should only be used if you exactly know what you are doing.
func WithoutClaimsValidation() ParserOption {
	return func(p *Parser) {
		p.skipClaimsValidation = true
	}
}

// WithLeeway returns the ParserOption for specifying the leeway window.
func WithLeeway(leeway time.Duration) ParserOption {
	return func(p *Parser) {
		p.validator.leeway = leeway
	}
}

// WithTimeFunc returns the ParserOption for specifying the time func. The
// primary use-case for this is testing. If you are looking for a way to account
// for clock-skew, WithLeeway should be used instead.
func WithTimeFunc(f func() time.Time) ParserOption {
	return func(p *Parser) {
		p.validator.timeFunc = f
	}
}

// WithIssuedAt returns the ParserOption to enable verification
// of issued-at.
func WithIssuedAt() ParserOption {
	return func(p *Parser) {
		p.validator.verifyIat = true
	}
}

// WithExpirationRequired returns the ParserOption to make exp claim required.
// By default exp claim is optional.
func WithExpirationRequired() ParserOption {
	return func(p *Parser) {
		p.validator.requireExp = true
	}
}

// WithAudience configures the validator to require the specified audience in
// the `aud` claim. Validation will fail if the audience is not listed in the
// token or the `aud` claim is missing.
//
// NOTE: While the `aud` claim is OPTIONAL in a JWT, the handling of it is
// application-specific. Since this validation API is helping developers in
// writing secure applications, we decided to REQUIRE the existence of the
// claim, if an audience is expected.
func WithAudience(aud string) ParserOption {
	return func(p *Parser) {
		p.validator.expectedAud = aud
	}
}

// WithIssuer configures the validator to require the specified issuer in the
// `iss` claim. Validation will fail if a different issuer is specified in the
// token or the `iss` claim is missing.
//
// NOTE: While the `iss` claim is OPTIONAL in a JWT, the handling of it is
// application-specific. Since this validation API is helping developers in
// writing secure applications, we decided to REQUIRE the existence of the
// claim, if an issuer is expected.
func WithIssuer(iss string) ParserOption {
	return func(p *Parser) {
		p.validator.expectedIss = iss
	}
}

// WithSubject configures the validator to require the specified subject in the
// `sub` claim. Validation will fail if a different subject is specified in the
// token or the `sub` claim is missing.
//
// NOTE: While the `sub` claim is OPTIONAL in a JWT, the handling of it is
// application-specific. Since this validation API is helping developers in
// writing secure applications, we decided to REQUIRE the existence of the
// claim, if a subject is expected.
func WithSubject(sub string) ParserOption {
	return func(p *Parser) {
		p.validator.expectedSub = sub
	}
}

// WithPaddingAllowed will enable the codec used for decoding JWTs to allow
// padding. Note that the JWS RFC7515 states that the tokens will utilize a
// Base64url encoding with no padding. Unfortunately, some implementations of
// JWT are producing non-standard tokens, and thus require support for decoding.
func WithPaddingAllowed() ParserOption {
	return func(p *Parser) {
		p.decodePaddingAllowed = true
	}
}

// WithStrictDecoding will switch the codec used for decoding JWTs into strict
// mode. In this mode, the decoder requires that trailing padding bits are zero,
// as described in RFC 4648 section 3.5.
func WithStrictDecoding() ParserOption {
	return func(p *Parser) {
		p.decodeStrict = true
	}
}
63 vendor/github.com/golang-jwt/jwt/v5/registered_claims.go generated vendored Normal file
@@ -0,0 +1,63 @@
package jwt

// RegisteredClaims are a structured version of the JWT Claims Set,
// restricted to Registered Claim Names, as referenced at
// https://datatracker.ietf.org/doc/html/rfc7519#section-4.1
//
// This type can be used on its own, but then additional private and
// public claims embedded in the JWT will not be parsed. The typical use-case
// therefore is to embed this in a user-defined claim type.
//
// See examples for how to use this with your own claim types.
type RegisteredClaims struct {
	// the `iss` (Issuer) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.1
	Issuer string `json:"iss,omitempty"`

	// the `sub` (Subject) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.2
	Subject string `json:"sub,omitempty"`

	// the `aud` (Audience) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.3
	Audience ClaimStrings `json:"aud,omitempty"`

	// the `exp` (Expiration Time) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.4
	ExpiresAt *NumericDate `json:"exp,omitempty"`

	// the `nbf` (Not Before) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.5
	NotBefore *NumericDate `json:"nbf,omitempty"`

	// the `iat` (Issued At) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.6
	IssuedAt *NumericDate `json:"iat,omitempty"`

	// the `jti` (JWT ID) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.7
	ID string `json:"jti,omitempty"`
}

// GetExpirationTime implements the Claims interface.
func (c RegisteredClaims) GetExpirationTime() (*NumericDate, error) {
	return c.ExpiresAt, nil
}

// GetNotBefore implements the Claims interface.
func (c RegisteredClaims) GetNotBefore() (*NumericDate, error) {
	return c.NotBefore, nil
}

// GetIssuedAt implements the Claims interface.
func (c RegisteredClaims) GetIssuedAt() (*NumericDate, error) {
	return c.IssuedAt, nil
}

// GetAudience implements the Claims interface.
func (c RegisteredClaims) GetAudience() (ClaimStrings, error) {
	return c.Audience, nil
}

// GetIssuer implements the Claims interface.
func (c RegisteredClaims) GetIssuer() (string, error) {
	return c.Issuer, nil
}

// GetSubject implements the Claims interface.
func (c RegisteredClaims) GetSubject() (string, error) {
	return c.Subject, nil
}
93 vendor/github.com/golang-jwt/jwt/v5/rsa.go generated vendored Normal file
@@ -0,0 +1,93 @@
package jwt

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
)

// SigningMethodRSA implements the RSA family of signing methods.
// Expects *rsa.PrivateKey for signing and *rsa.PublicKey for validation
type SigningMethodRSA struct {
	Name string
	Hash crypto.Hash
}

// Specific instances for RS256 and company
var (
	SigningMethodRS256 *SigningMethodRSA
	SigningMethodRS384 *SigningMethodRSA
	SigningMethodRS512 *SigningMethodRSA
)

func init() {
	// RS256
	SigningMethodRS256 = &SigningMethodRSA{"RS256", crypto.SHA256}
	RegisterSigningMethod(SigningMethodRS256.Alg(), func() SigningMethod {
		return SigningMethodRS256
	})

	// RS384
	SigningMethodRS384 = &SigningMethodRSA{"RS384", crypto.SHA384}
	RegisterSigningMethod(SigningMethodRS384.Alg(), func() SigningMethod {
		return SigningMethodRS384
	})

	// RS512
	SigningMethodRS512 = &SigningMethodRSA{"RS512", crypto.SHA512}
	RegisterSigningMethod(SigningMethodRS512.Alg(), func() SigningMethod {
		return SigningMethodRS512
	})
}

func (m *SigningMethodRSA) Alg() string {
	return m.Name
}

// Verify implements token verification for the SigningMethod.
// For this signing method, key must be an *rsa.PublicKey structure.
func (m *SigningMethodRSA) Verify(signingString string, sig []byte, key interface{}) error {
	var rsaKey *rsa.PublicKey
	var ok bool

	if rsaKey, ok = key.(*rsa.PublicKey); !ok {
		return newError("RSA verify expects *rsa.PublicKey", ErrInvalidKeyType)
	}

	// Create hasher
	if !m.Hash.Available() {
		return ErrHashUnavailable
	}
	hasher := m.Hash.New()
	hasher.Write([]byte(signingString))

	// Verify the signature
	return rsa.VerifyPKCS1v15(rsaKey, m.Hash, hasher.Sum(nil), sig)
}

// Sign implements token signing for the SigningMethod.
// For this signing method, key must be an *rsa.PrivateKey structure.
func (m *SigningMethodRSA) Sign(signingString string, key interface{}) ([]byte, error) {
	var rsaKey *rsa.PrivateKey
	var ok bool

	// Validate type of key
	if rsaKey, ok = key.(*rsa.PrivateKey); !ok {
		return nil, newError("RSA sign expects *rsa.PrivateKey", ErrInvalidKeyType)
	}

	// Create the hasher
	if !m.Hash.Available() {
		return nil, ErrHashUnavailable
	}

	hasher := m.Hash.New()
	hasher.Write([]byte(signingString))

	// Sign the string and return the encoded bytes
	if sigBytes, err := rsa.SignPKCS1v15(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil)); err == nil {
		return sigBytes, nil
	} else {
		return nil, err
	}
}
135 vendor/github.com/golang-jwt/jwt/v5/rsa_pss.go generated vendored Normal file
@@ -0,0 +1,135 @@
//go:build go1.4
// +build go1.4

package jwt

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
)

// SigningMethodRSAPSS implements the RSA-PSS family of signing methods
type SigningMethodRSAPSS struct {
	*SigningMethodRSA
	Options *rsa.PSSOptions
	// VerifyOptions is optional. If set, it overrides Options for rsa.VerifyPSS.
	// Used to accept tokens signed with rsa.PSSSaltLengthAuto, which doesn't follow
	// https://tools.ietf.org/html/rfc7518#section-3.5 but was used previously.
	// See https://github.com/dgrijalva/jwt-go/issues/285#issuecomment-437451244 for details.
	VerifyOptions *rsa.PSSOptions
}

// Specific instances for PS256 and company.
var (
	SigningMethodPS256 *SigningMethodRSAPSS
	SigningMethodPS384 *SigningMethodRSAPSS
	SigningMethodPS512 *SigningMethodRSAPSS
)

func init() {
	// PS256
	SigningMethodPS256 = &SigningMethodRSAPSS{
		SigningMethodRSA: &SigningMethodRSA{
			Name: "PS256",
			Hash: crypto.SHA256,
		},
		Options: &rsa.PSSOptions{
			SaltLength: rsa.PSSSaltLengthEqualsHash,
		},
		VerifyOptions: &rsa.PSSOptions{
			SaltLength: rsa.PSSSaltLengthAuto,
		},
	}
	RegisterSigningMethod(SigningMethodPS256.Alg(), func() SigningMethod {
		return SigningMethodPS256
	})

	// PS384
	SigningMethodPS384 = &SigningMethodRSAPSS{
		SigningMethodRSA: &SigningMethodRSA{
			Name: "PS384",
			Hash: crypto.SHA384,
		},
		Options: &rsa.PSSOptions{
			SaltLength: rsa.PSSSaltLengthEqualsHash,
		},
		VerifyOptions: &rsa.PSSOptions{
			SaltLength: rsa.PSSSaltLengthAuto,
		},
	}
	RegisterSigningMethod(SigningMethodPS384.Alg(), func() SigningMethod {
		return SigningMethodPS384
	})

	// PS512
	SigningMethodPS512 = &SigningMethodRSAPSS{
		SigningMethodRSA: &SigningMethodRSA{
			Name: "PS512",
			Hash: crypto.SHA512,
		},
		Options: &rsa.PSSOptions{
			SaltLength: rsa.PSSSaltLengthEqualsHash,
		},
		VerifyOptions: &rsa.PSSOptions{
			SaltLength: rsa.PSSSaltLengthAuto,
		},
	}
	RegisterSigningMethod(SigningMethodPS512.Alg(), func() SigningMethod {
		return SigningMethodPS512
	})
}

// Verify implements token verification for the SigningMethod.
// For this verify method, key must be an *rsa.PublicKey struct
func (m *SigningMethodRSAPSS) Verify(signingString string, sig []byte, key interface{}) error {
	var rsaKey *rsa.PublicKey
	switch k := key.(type) {
	case *rsa.PublicKey:
		rsaKey = k
	default:
		return newError("RSA-PSS verify expects *rsa.PublicKey", ErrInvalidKeyType)
	}

	// Create hasher
	if !m.Hash.Available() {
		return ErrHashUnavailable
	}
	hasher := m.Hash.New()
	hasher.Write([]byte(signingString))

	opts := m.Options
	if m.VerifyOptions != nil {
		opts = m.VerifyOptions
	}

	return rsa.VerifyPSS(rsaKey, m.Hash, hasher.Sum(nil), sig, opts)
}

// Sign implements token signing for the SigningMethod.
// For this signing method, key must be an *rsa.PrivateKey struct
func (m *SigningMethodRSAPSS) Sign(signingString string, key interface{}) ([]byte, error) {
	var rsaKey *rsa.PrivateKey

	switch k := key.(type) {
	case *rsa.PrivateKey:
		rsaKey = k
	default:
		return nil, newError("RSA-PSS sign expects *rsa.PrivateKey", ErrInvalidKeyType)
	}

	// Create the hasher
	if !m.Hash.Available() {
		return nil, ErrHashUnavailable
	}

	hasher := m.Hash.New()
	hasher.Write([]byte(signingString))

	// Sign the string and return the encoded bytes
	if sigBytes, err := rsa.SignPSS(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil), m.Options); err == nil {
		return sigBytes, nil
	} else {
		return nil, err
	}
}
107 vendor/github.com/golang-jwt/jwt/v5/rsa_utils.go generated vendored Normal file
@@ -0,0 +1,107 @@
package jwt

import (
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"errors"
)

var (
	ErrKeyMustBePEMEncoded = errors.New("invalid key: Key must be a PEM encoded PKCS1 or PKCS8 key")
	ErrNotRSAPrivateKey    = errors.New("key is not a valid RSA private key")
	ErrNotRSAPublicKey     = errors.New("key is not a valid RSA public key")
)

// ParseRSAPrivateKeyFromPEM parses a PEM encoded PKCS1 or PKCS8 private key
func ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	var parsedKey interface{}
	if parsedKey, err = x509.ParsePKCS1PrivateKey(block.Bytes); err != nil {
		if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil {
			return nil, err
		}
	}

	var pkey *rsa.PrivateKey
	var ok bool
	if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok {
		return nil, ErrNotRSAPrivateKey
	}

	return pkey, nil
}

// ParseRSAPrivateKeyFromPEMWithPassword parses a PEM encoded PKCS1 or PKCS8 private key protected with a password
//
// Deprecated: This function is deprecated and should not be used anymore. It uses the deprecated x509.DecryptPEMBlock
// function, which was deprecated since RFC 1423 is regarded insecure by design. Unfortunately, there is no alternative
// in the Go standard library for now. See https://github.com/golang/go/issues/8860.
func ParseRSAPrivateKeyFromPEMWithPassword(key []byte, password string) (*rsa.PrivateKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	var parsedKey interface{}

	var blockDecrypted []byte
	if blockDecrypted, err = x509.DecryptPEMBlock(block, []byte(password)); err != nil {
		return nil, err
	}

	if parsedKey, err = x509.ParsePKCS1PrivateKey(blockDecrypted); err != nil {
		if parsedKey, err = x509.ParsePKCS8PrivateKey(blockDecrypted); err != nil {
			return nil, err
		}
	}

	var pkey *rsa.PrivateKey
	var ok bool
	if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok {
		return nil, ErrNotRSAPrivateKey
	}

	return pkey, nil
}

// ParseRSAPublicKeyFromPEM parses a certificate or a PEM encoded PKCS1 or PKIX public key
func ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	// Parse the key
	var parsedKey interface{}
	if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil {
		if cert, err := x509.ParseCertificate(block.Bytes); err == nil {
			parsedKey = cert.PublicKey
		} else {
			if parsedKey, err = x509.ParsePKCS1PublicKey(block.Bytes); err != nil {
				return nil, err
			}
		}
	}

	var pkey *rsa.PublicKey
	var ok bool
	if pkey, ok = parsedKey.(*rsa.PublicKey); !ok {
		return nil, ErrNotRSAPublicKey
	}

	return pkey, nil
}
49 vendor/github.com/golang-jwt/jwt/v5/signing_method.go generated vendored Normal file
@@ -0,0 +1,49 @@
package jwt

import (
	"sync"
)

var signingMethods = map[string]func() SigningMethod{}
var signingMethodLock = new(sync.RWMutex)

// SigningMethod can be used to add new methods for signing or verifying tokens. It
// takes a decoded signature as an input in the Verify function and produces a
// signature in Sign. The signature is then usually base64 encoded as part of a
// JWT.
type SigningMethod interface {
	Verify(signingString string, sig []byte, key interface{}) error // Returns nil if signature is valid
	Sign(signingString string, key interface{}) ([]byte, error)     // Returns signature or error
	Alg() string                                                    // returns the alg identifier for this method (example: 'HS256')
}

// RegisterSigningMethod registers the "alg" name and a factory function for a signing method.
// This is typically done during init() in the method's implementation
func RegisterSigningMethod(alg string, f func() SigningMethod) {
	signingMethodLock.Lock()
	defer signingMethodLock.Unlock()

	signingMethods[alg] = f
}

// GetSigningMethod retrieves a signing method from an "alg" string
func GetSigningMethod(alg string) (method SigningMethod) {
	signingMethodLock.RLock()
	defer signingMethodLock.RUnlock()

	if methodF, ok := signingMethods[alg]; ok {
		method = methodF()
	}
	return
}

// GetAlgorithms returns a list of registered "alg" names
func GetAlgorithms() (algs []string) {
	signingMethodLock.RLock()
	defer signingMethodLock.RUnlock()

	for alg := range signingMethods {
		algs = append(algs, alg)
	}
	return
}
1 vendor/github.com/golang-jwt/jwt/v5/staticcheck.conf generated vendored Normal file
@@ -0,0 +1 @@
checks = ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1023"]
100 vendor/github.com/golang-jwt/jwt/v5/token.go generated vendored Normal file
@@ -0,0 +1,100 @@
package jwt

import (
	"crypto"
	"encoding/base64"
	"encoding/json"
)

// Keyfunc will be used by the Parse methods as a callback function to supply
// the key for verification. The function receives the parsed, but unverified
// Token. This allows you to use properties in the Header of the token (such as
// `kid`) to identify which key to use.
//
// The returned interface{} may be a single key or a VerificationKeySet containing
// multiple keys.
type Keyfunc func(*Token) (interface{}, error)

// VerificationKey represents a public or secret key for verifying a token's signature.
type VerificationKey interface {
	crypto.PublicKey | []uint8
}

// VerificationKeySet is a set of public or secret keys. It is used by the parser to verify a token.
type VerificationKeySet struct {
	Keys []VerificationKey
}

// Token represents a JWT Token. Different fields will be used depending on
// whether you're creating or parsing/verifying a token.
type Token struct {
	Raw       string                 // Raw contains the raw token. Populated when you [Parse] a token
	Method    SigningMethod          // Method is the signing method used or to be used
	Header    map[string]interface{} // Header is the first segment of the token in decoded form
	Claims    Claims                 // Claims is the second segment of the token in decoded form
	Signature []byte                 // Signature is the third segment of the token in decoded form. Populated when you Parse a token
	Valid     bool                   // Valid specifies if the token is valid. Populated when you Parse/Verify a token
}

// New creates a new [Token] with the specified signing method and an empty map
// of claims. Additional options can be specified, but are currently unused.
func New(method SigningMethod, opts ...TokenOption) *Token {
	return NewWithClaims(method, MapClaims{}, opts...)
}

// NewWithClaims creates a new [Token] with the specified signing method and
// claims. Additional options can be specified, but are currently unused.
func NewWithClaims(method SigningMethod, claims Claims, opts ...TokenOption) *Token {
	return &Token{
		Header: map[string]interface{}{
			"typ": "JWT",
			"alg": method.Alg(),
		},
		Claims: claims,
		Method: method,
	}
}

// SignedString creates and returns a complete, signed JWT. The token is signed
// using the SigningMethod specified in the token. Please refer to
// https://golang-jwt.github.io/jwt/usage/signing_methods/#signing-methods-and-key-types
// for an overview of the different signing methods and their respective key
// types.
func (t *Token) SignedString(key interface{}) (string, error) {
	sstr, err := t.SigningString()
	if err != nil {
		return "", err
	}

	sig, err := t.Method.Sign(sstr, key)
	if err != nil {
		return "", err
	}

	return sstr + "." + t.EncodeSegment(sig), nil
}

// SigningString generates the signing string. This is the most expensive part
// of the whole deal. Unless you need this for something special, just go
// straight for the SignedString.
func (t *Token) SigningString() (string, error) {
	h, err := json.Marshal(t.Header)
	if err != nil {
		return "", err
	}

	c, err := json.Marshal(t.Claims)
	if err != nil {
		return "", err
	}

	return t.EncodeSegment(h) + "." + t.EncodeSegment(c), nil
}

// EncodeSegment encodes a JWT specific base64url encoding with padding
// stripped. In the future, this function might take into account a
// [TokenOption]. Therefore, this function exists as a method of [Token], rather
// than a global function.
func (*Token) EncodeSegment(seg []byte) string {
	return base64.RawURLEncoding.EncodeToString(seg)
}
5 vendor/github.com/golang-jwt/jwt/v5/token_option.go generated vendored Normal file
@@ -0,0 +1,5 @@
package jwt

// TokenOption is a reserved type, which provides some forward compatibility,
// if we ever want to introduce token creation-related options.
type TokenOption func(*Token)
149 vendor/github.com/golang-jwt/jwt/v5/types.go generated vendored Normal file
@@ -0,0 +1,149 @@
package jwt

import (
	"encoding/json"
	"fmt"
	"math"
	"strconv"
	"time"
)

// TimePrecision sets the precision of times and dates within this library. This
// has an influence on the precision of times when comparing expiry or other
// related time fields. Furthermore, it is also the precision of times when
// serializing.
//
// For backwards compatibility the default precision is set to seconds, so that
// no fractional timestamps are generated.
var TimePrecision = time.Second

// MarshalSingleStringAsArray modifies the behavior of the ClaimStrings type,
// especially its MarshalJSON function.
//
// If it is set to true (the default), it will always serialize the type as an
// array of strings, even if it just contains one element, defaulting to the
// behavior of the underlying []string. If it is set to false, it will serialize
// to a single string, if it contains one element. Otherwise, it will serialize
// to an array of strings.
var MarshalSingleStringAsArray = true

// NumericDate represents a JSON numeric date value, as referenced at
// https://datatracker.ietf.org/doc/html/rfc7519#section-2.
type NumericDate struct {
	time.Time
}

// NewNumericDate constructs a new *NumericDate from a standard library time.Time struct.
// It will truncate the timestamp according to the precision specified in TimePrecision.
func NewNumericDate(t time.Time) *NumericDate {
	return &NumericDate{t.Truncate(TimePrecision)}
}

// newNumericDateFromSeconds creates a new *NumericDate out of a float64 representing a
// UNIX epoch with the float fraction representing non-integer seconds.
func newNumericDateFromSeconds(f float64) *NumericDate {
	round, frac := math.Modf(f)
	return NewNumericDate(time.Unix(int64(round), int64(frac*1e9)))
}

// MarshalJSON is an implementation of the json.Marshaler interface and serializes the UNIX epoch
// represented in NumericDate to a byte array, using the precision specified in TimePrecision.
func (date NumericDate) MarshalJSON() (b []byte, err error) {
	var prec int
	if TimePrecision < time.Second {
		prec = int(math.Log10(float64(time.Second) / float64(TimePrecision)))
	}
	truncatedDate := date.Truncate(TimePrecision)

	// For very large timestamps, UnixNano would overflow an int64, but this
	// function requires nanosecond level precision, so we have to use the
	// following technique to get round the issue:
	//
	// 1. Take the normal unix timestamp to form the whole number part of the
	//    output,
	// 2. Take the result of the Nanosecond function, which returns the offset
	//    within the second of the particular unix time instance, to form the
	//    decimal part of the output
	// 3. Concatenate them to produce the final result
	seconds := strconv.FormatInt(truncatedDate.Unix(), 10)
	nanosecondsOffset := strconv.FormatFloat(float64(truncatedDate.Nanosecond())/float64(time.Second), 'f', prec, 64)

	output := append([]byte(seconds), []byte(nanosecondsOffset)[1:]...)

	return output, nil
}

// UnmarshalJSON is an implementation of the json.Unmarshaler interface and
// deserializes a [NumericDate] from a JSON representation, i.e. a
// [json.Number]. This number represents a UNIX epoch with either integer or
// non-integer seconds.
func (date *NumericDate) UnmarshalJSON(b []byte) (err error) {
	var (
		number json.Number
		f      float64
	)

	if err = json.Unmarshal(b, &number); err != nil {
		return fmt.Errorf("could not parse NumericDate: %w", err)
	}

	if f, err = number.Float64(); err != nil {
		return fmt.Errorf("could not convert json number value to float: %w", err)
	}

	n := newNumericDateFromSeconds(f)
	*date = *n

	return nil
}

// ClaimStrings is basically just a slice of strings, but it can be either
// serialized from a string array or just a string. This type is necessary,
// since the "aud" claim can either be a single string or an array.
type ClaimStrings []string

func (s *ClaimStrings) UnmarshalJSON(data []byte) (err error) {
	var value interface{}

	if err = json.Unmarshal(data, &value); err != nil {
		return err
	}

	var aud []string

	switch v := value.(type) {
	case string:
		aud = append(aud, v)
	case []string:
		aud = ClaimStrings(v)
	case []interface{}:
		for _, vv := range v {
			vs, ok := vv.(string)
			if !ok {
				return ErrInvalidType
			}
			aud = append(aud, vs)
		}
	case nil:
		return nil
	default:
		return ErrInvalidType
	}

	*s = aud

	return
}

func (s ClaimStrings) MarshalJSON() (b []byte, err error) {
	// This handles a special case in the JWT RFC. If the string array, e.g.
	// used by the "aud" field, only contains one element, it MAY be serialized
	// as a single string. This may or may not be desired based on the ecosystem
	// of other JWT libraries used, so we make it configurable by the variable
	// MarshalSingleStringAsArray.
	if len(s) == 1 && !MarshalSingleStringAsArray {
		return json.Marshal(s[0])
	}

	return json.Marshal([]string(s))
}
316 vendor/github.com/golang-jwt/jwt/v5/validator.go generated vendored Normal file
@@ -0,0 +1,316 @@
package jwt

import (
	"crypto/subtle"
	"fmt"
	"time"
)

// ClaimsValidator is an interface that can be implemented by custom claims that
// wish to execute any additional claims validation based on
// application-specific logic. The Validate function is then executed in
// addition to the regular claims validation and any error returned is appended
// to the final validation result.
//
//	type MyCustomClaims struct {
//		Foo string `json:"foo"`
//		jwt.RegisteredClaims
//	}
//
//	func (m MyCustomClaims) Validate() error {
//		if m.Foo != "bar" {
//			return errors.New("must be foobar")
//		}
//		return nil
//	}
type ClaimsValidator interface {
	Claims
	Validate() error
}

// Validator is the core of the new Validation API. It is automatically used by
// a [Parser] during parsing and can be modified with various parser options.
//
// The [NewValidator] function should be used to create an instance of this
// struct.
type Validator struct {
	// leeway is an optional leeway that can be provided to account for clock skew.
	leeway time.Duration

	// timeFunc is used to supply the current time that is needed for
	// validation. If unspecified, this defaults to time.Now.
	timeFunc func() time.Time

	// requireExp specifies whether the exp claim is required
	requireExp bool

	// verifyIat specifies whether the iat (Issued At) claim will be verified.
	// According to https://www.rfc-editor.org/rfc/rfc7519#section-4.1.6 this
	// only specifies the age of the token, but no validation check is
	// necessary. However, if wanted, it can be checked if the iat is
	// unrealistic, i.e., in the future.
	verifyIat bool

	// expectedAud contains the audience this token expects. Supplying an empty
	// string will disable aud checking.
	expectedAud string

	// expectedIss contains the issuer this token expects. Supplying an empty
	// string will disable iss checking.
	expectedIss string

	// expectedSub contains the subject this token expects. Supplying an empty
	// string will disable sub checking.
	expectedSub string
}

// NewValidator can be used to create a stand-alone validator with the supplied
// options. This validator can then be used to validate already parsed claims.
//
// Note: Under normal circumstances, explicitly creating a validator is not
// needed and can potentially be dangerous; instead functions of the [Parser]
// class should be used.
//
// The [Validator] is only checking the *validity* of the claims, such as its
// expiration time, but it does NOT perform *signature verification* of the
// token.
func NewValidator(opts ...ParserOption) *Validator {
	p := NewParser(opts...)
	return p.validator
}

// Validate validates the given claims. It will also perform any custom
// validation if claims implements the [ClaimsValidator] interface.
//
// Note: It will NOT perform any *signature verification* on the token that
// contains the claims and expects that the [Claim] was already successfully
// verified.
func (v *Validator) Validate(claims Claims) error {
	var (
		now  time.Time
		errs []error = make([]error, 0, 6)
		err  error
	)

	// Check, if we have a time func
	if v.timeFunc != nil {
		now = v.timeFunc()
	} else {
		now = time.Now()
	}

	// We always need to check the expiration time, but usage of the claim
	// itself is OPTIONAL by default. requireExp overrides this behavior
	// and makes the exp claim mandatory.
	if err = v.verifyExpiresAt(claims, now, v.requireExp); err != nil {
		errs = append(errs, err)
	}

	// We always need to check not-before, but usage of the claim itself is
	// OPTIONAL.
	if err = v.verifyNotBefore(claims, now, false); err != nil {
		errs = append(errs, err)
	}

	// Check issued-at if the option is enabled
	if v.verifyIat {
		if err = v.verifyIssuedAt(claims, now, false); err != nil {
			errs = append(errs, err)
		}
	}

	// If we have an expected audience, we also require the audience claim
	if v.expectedAud != "" {
		if err = v.verifyAudience(claims, v.expectedAud, true); err != nil {
			errs = append(errs, err)
		}
	}

	// If we have an expected issuer, we also require the issuer claim
	if v.expectedIss != "" {
		if err = v.verifyIssuer(claims, v.expectedIss, true); err != nil {
			errs = append(errs, err)
		}
	}

	// If we have an expected subject, we also require the subject claim
	if v.expectedSub != "" {
		if err = v.verifySubject(claims, v.expectedSub, true); err != nil {
			errs = append(errs, err)
		}
	}

	// Finally, we want to give the claim itself some possibility to do some
	// additional custom validation based on a custom Validate function.
	cvt, ok := claims.(ClaimsValidator)
	if ok {
		if err := cvt.Validate(); err != nil {
			errs = append(errs, err)
		}
	}

	if len(errs) == 0 {
		return nil
	}

	return joinErrors(errs...)
}

// verifyExpiresAt compares the exp claim in claims against cmp. This function
// will succeed if cmp < exp. Additional leeway is taken into account.
//
// If exp is not set, it will succeed if the claim is not required,
// otherwise ErrTokenRequiredClaimMissing will be returned.
//
// Additionally, if any error occurs while retrieving the claim, e.g., when it's
// the wrong type, an ErrTokenUnverifiable error will be returned.
func (v *Validator) verifyExpiresAt(claims Claims, cmp time.Time, required bool) error {
	exp, err := claims.GetExpirationTime()
	if err != nil {
		return err
	}

	if exp == nil {
		return errorIfRequired(required, "exp")
	}

	return errorIfFalse(cmp.Before((exp.Time).Add(+v.leeway)), ErrTokenExpired)
}

// verifyIssuedAt compares the iat claim in claims against cmp. This function
// will succeed if cmp >= iat. Additional leeway is taken into account.
//
// If iat is not set, it will succeed if the claim is not required,
// otherwise ErrTokenRequiredClaimMissing will be returned.
//
// Additionally, if any error occurs while retrieving the claim, e.g., when it's
// the wrong type, an ErrTokenUnverifiable error will be returned.
func (v *Validator) verifyIssuedAt(claims Claims, cmp time.Time, required bool) error {
	iat, err := claims.GetIssuedAt()
	if err != nil {
		return err
	}

	if iat == nil {
		return errorIfRequired(required, "iat")
	}

	return errorIfFalse(!cmp.Before(iat.Add(-v.leeway)), ErrTokenUsedBeforeIssued)
}

// verifyNotBefore compares the nbf claim in claims against cmp. This function
// will return true if cmp >= nbf. Additional leeway is taken into account.
//
// If nbf is not set, it will succeed if the claim is not required,
// otherwise ErrTokenRequiredClaimMissing will be returned.
//
// Additionally, if any error occurs while retrieving the claim, e.g., when it's
// the wrong type, an ErrTokenUnverifiable error will be returned.
func (v *Validator) verifyNotBefore(claims Claims, cmp time.Time, required bool) error {
	nbf, err := claims.GetNotBefore()
	if err != nil {
		return err
	}

	if nbf == nil {
		return errorIfRequired(required, "nbf")
	}

	return errorIfFalse(!cmp.Before(nbf.Add(-v.leeway)), ErrTokenNotValidYet)
}

// verifyAudience compares the aud claim against cmp.
//
// If aud is not set or an empty list, it will succeed if the claim is not required,
// otherwise ErrTokenRequiredClaimMissing will be returned.
//
// Additionally, if any error occurs while retrieving the claim, e.g., when it's
// the wrong type, an ErrTokenUnverifiable error will be returned.
func (v *Validator) verifyAudience(claims Claims, cmp string, required bool) error {
	aud, err := claims.GetAudience()
	if err != nil {
		return err
	}

	if len(aud) == 0 {
		return errorIfRequired(required, "aud")
	}

	// use a var here to keep constant time compare when looping over a number of claims
	result := false

	var stringClaims string
	for _, a := range aud {
		if subtle.ConstantTimeCompare([]byte(a), []byte(cmp)) != 0 {
			result = true
		}
		stringClaims = stringClaims + a
	}

	// case where "" is sent in one or many aud claims
	if stringClaims == "" {
		return errorIfRequired(required, "aud")
	}

	return errorIfFalse(result, ErrTokenInvalidAudience)
}

// verifyIssuer compares the iss claim in claims against cmp.
//
// If iss is not set, it will succeed if the claim is not required,
// otherwise ErrTokenRequiredClaimMissing will be returned.
//
// Additionally, if any error occurs while retrieving the claim, e.g., when it's
// the wrong type, an ErrTokenUnverifiable error will be returned.
func (v *Validator) verifyIssuer(claims Claims, cmp string, required bool) error {
	iss, err := claims.GetIssuer()
	if err != nil {
		return err
	}

	if iss == "" {
		return errorIfRequired(required, "iss")
	}

	return errorIfFalse(iss == cmp, ErrTokenInvalidIssuer)
}

// verifySubject compares the sub claim against cmp.
//
// If sub is not set, it will succeed if the claim is not required,
// otherwise ErrTokenRequiredClaimMissing will be returned.
//
// Additionally, if any error occurs while retrieving the claim, e.g., when it's
// the wrong type, an ErrTokenUnverifiable error will be returned.
func (v *Validator) verifySubject(claims Claims, cmp string, required bool) error {
	sub, err := claims.GetSubject()
	if err != nil {
		return err
	}

	if sub == "" {
		return errorIfRequired(required, "sub")
	}

	return errorIfFalse(sub == cmp, ErrTokenInvalidSubject)
}

// errorIfFalse returns the error specified in err, if the value is false.
// Otherwise, nil is returned.
func errorIfFalse(value bool, err error) error {
	if value {
		return nil
	} else {
		return err
	}
}

// errorIfRequired returns an ErrTokenRequiredClaimMissing error if required is
// true. Otherwise, nil is returned.
func errorIfRequired(required bool, claim string) error {
	if required {
		return newError(fmt.Sprintf("%s claim is required", claim), ErrTokenRequiredClaimMissing)
	} else {
		return nil
	}
}
201 vendor/github.com/inconshreveable/mousetrap/LICENSE generated vendored Normal file
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright 2022 Alan Shreve (@inconshreveable)
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
23
vendor/github.com/inconshreveable/mousetrap/README.md
generated
vendored
Normal file
@@ -0,0 +1,23 @@
# mousetrap

mousetrap is a tiny library that answers a single question.

On a Windows machine, was the process invoked by someone double clicking on
the executable file while browsing in explorer?

### Motivation

Windows developers unfamiliar with command line tools will often "double-click"
the executable for a tool. Because most CLI tools print the help and then exit
when invoked without arguments, this is often very frustrating for those users.

mousetrap provides a way to detect these invocations so that you can provide
more helpful behavior and instructions on how to run the CLI tool. To see what
this looks like, both from an organizational and a technical perspective, see
https://inconshreveable.com/09-09-2014/sweat-the-small-stuff/

### The interface

The library exposes a single interface:

    func StartedByExplorer() (bool)
16
vendor/github.com/inconshreveable/mousetrap/trap_others.go
generated
vendored
Normal file
@@ -0,0 +1,16 @@
//go:build !windows
// +build !windows

package mousetrap

// StartedByExplorer returns true if the program was invoked by the user
// double-clicking on the executable from explorer.exe
//
// It is conservative and returns false if any of the internal calls fail.
// It does not guarantee that the program was run from a terminal. It only can tell you
// whether it was launched from explorer.exe
//
// On non-Windows platforms, it always returns false.
func StartedByExplorer() bool {
	return false
}
42
vendor/github.com/inconshreveable/mousetrap/trap_windows.go
generated
vendored
Normal file
@@ -0,0 +1,42 @@
package mousetrap

import (
	"syscall"
	"unsafe"
)

func getProcessEntry(pid int) (*syscall.ProcessEntry32, error) {
	snapshot, err := syscall.CreateToolhelp32Snapshot(syscall.TH32CS_SNAPPROCESS, 0)
	if err != nil {
		return nil, err
	}
	defer syscall.CloseHandle(snapshot)
	var procEntry syscall.ProcessEntry32
	procEntry.Size = uint32(unsafe.Sizeof(procEntry))
	if err = syscall.Process32First(snapshot, &procEntry); err != nil {
		return nil, err
	}
	for {
		if procEntry.ProcessID == uint32(pid) {
			return &procEntry, nil
		}
		err = syscall.Process32Next(snapshot, &procEntry)
		if err != nil {
			return nil, err
		}
	}
}

// StartedByExplorer returns true if the program was invoked by the user double-clicking
// on the executable from explorer.exe
//
// It is conservative and returns false if any of the internal calls fail.
// It does not guarantee that the program was run from a terminal. It only can tell you
// whether it was launched from explorer.exe
func StartedByExplorer() bool {
	pe, err := getProcessEntry(syscall.Getppid())
	if err != nil {
		return false
	}
	return "explorer.exe" == syscall.UTF16ToString(pe.ExeFile[:])
}
3
vendor/github.com/justinmichaelvieira/escpos/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,3 @@
.idea/
.vscode/
.DS_Store
21
vendor/github.com/justinmichaelvieira/escpos/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2021 Hendrik Fellerhoff

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
95
vendor/github.com/justinmichaelvieira/escpos/Readme.md
generated
vendored
Normal file
@@ -0,0 +1,95 @@
# About escpos [](https://godoc.org/github.com/hennedo/escpos)
[](https://app.fossa.com/projects/git%2Bgithub.com%2Fhennedo%2Fescpos?ref=badge_shield)
[](https://pkg.go.dev/github.com/hennedo/escpos)

This is a [Golang](http://www.golang.org/project) package that provides
[ESC-POS](https://en.wikipedia.org/wiki/ESC/P) library functions to help with
sending control codes to an ESC-POS thermal printer.

It was largely inspired by [seer-robotics/escpos](https://github.com/seer-robotics/escpos) but is a complete rewrite.

It implements the protocol described in [this Command Manual](https://pos-x.com/download/escpos-programming-manual/)

## Current featureset
* [x] Initializing the Printer
* [x] Toggling Underline mode
* [x] Toggling Bold text
* [x] Toggling upside-down character printing
* [x] Toggling Reverse mode
* [x] Linespace settings
* [x] Rotated characters
* [x] Align text
* [x] Default ASCII Charset, Western Europe and GBK encoding
* [x] Character size settings
* [x] UPC-A, UPC-E, EAN13, EAN8 Barcodes
* [x] QR Codes
* [x] Standard printing mode
* [x] Image Printing
* [x] Printing of predefined NV images

## Installation ##

Install the package via the following:

    go get -u github.com/hennedo/escpos

## Usage ##

The escpos package can be used as the following:

```go
package main

import (
	"net"

	"github.com/hennedo/escpos"
)

func main() {
	socket, err := net.Dial("tcp", "192.168.8.40:9100")
	if err != nil {
		println(err.Error())
	}
	defer socket.Close()

	p := escpos.New(socket)
	p.SetConfig(escpos.ConfigEpsonTMT20II)

	p.Bold(true).Size(2, 2).Write("Hello World")
	p.LineFeed()
	p.Bold(false).Underline(2).Justify(escpos.JustifyCenter).Write("this is underlined")
	p.LineFeed()
	p.QRCode("https://github.com/hennedo/escpos", true, 10, escpos.QRCodeErrorCorrectionLevelH)

	// You need to use either p.Print() or p.PrintAndCut() at the end to send the data to the printer.
	p.PrintAndCut()
}
```

## Disable features ##

As the library sets all the styling parameters again for each call of Write, you might run into compatibility issues. Therefore it is possible to deactivate features.
To do so, use a predefined config (available for all printers listed under [Compatibility](#Compatibility)) right after the escpos.New call

```go
p := escpos.New(socket)
p.SetConfig(escpos.ConfigEpsonTMT20II) // predefined config for the Epson TM-T20II

// or for example

p.SetConfig(escpos.PrinterConfig{DisableUnderline: true})
```

## Compatibility ##

This is a (not complete) list of supported and tested devices.

| Manufacturer | Model    | Styling   | Barcodes | QR Codes | Images |
| ------------ | -------- | --------- | -------- | -------- | ------ |
| Epson        | TM-T20II | ✅ | ✅ | ✅ | ✅ |
| Epson        | TM-T88II | ☑️<br/>UpsideDown Printing not supported | ✅ | | ✅ |

## License
[](https://app.fossa.com/projects/git%2Bgithub.com%2Fhennedo%2Fescpos?ref=badge_large)
143
vendor/github.com/justinmichaelvieira/escpos/bitimage.go
generated
vendored
Normal file
@@ -0,0 +1,143 @@
// stolen and modified from https://github.com/mugli/png2escpos
package escpos

import (
	"fmt"
	"image"
)

func closestNDivisibleBy8(n int) int {
	q := n / 8
	n1 := q * 8

	return n1
}

func printImage(img image.Image) (xL byte, xH byte, yL byte, yH byte, data []byte) {
	width, height, pixels := getPixels(img)

	removeTransparency(&pixels)
	makeGrayscale(&pixels)

	printWidth := closestNDivisibleBy8(width)
	printHeight := closestNDivisibleBy8(height)
	bytes, _ := rasterize(printWidth, printHeight, &pixels)

	return byte((printWidth >> 3) & 0xff), byte(((printWidth >> 3) >> 8) & 0xff), byte(printHeight & 0xff), byte((printHeight >> 8) & 0xff), bytes
}

func makeGrayscale(pixels *[][]pixel) {
	height := len(*pixels)
	width := len((*pixels)[0])

	for y := 0; y < height; y++ {
		row := (*pixels)[y]
		for x := 0; x < width; x++ {
			pixel := row[x]

			luminance := (float64(pixel.R) * 0.299) + (float64(pixel.G) * 0.587) + (float64(pixel.B) * 0.114)
			var value int
			if luminance < 128 {
				value = 0
			} else {
				value = 255
			}

			pixel.R = value
			pixel.G = value
			pixel.B = value

			row[x] = pixel
		}
	}
}

func removeTransparency(pixels *[][]pixel) {
	height := len(*pixels)
	width := len((*pixels)[0])

	for y := 0; y < height; y++ {
		row := (*pixels)[y]
		for x := 0; x < width; x++ {
			pixel := row[x]

			alpha := pixel.A
			invAlpha := 255 - alpha

			pixel.R = (alpha*pixel.R + invAlpha*255) / 255
			pixel.G = (alpha*pixel.G + invAlpha*255) / 255
			pixel.B = (alpha*pixel.B + invAlpha*255) / 255
			pixel.A = 255

			row[x] = pixel
		}
	}
}

func rasterize(printWidth int, printHeight int, pixels *[][]pixel) ([]byte, error) {
	if printWidth%8 != 0 {
		return nil, fmt.Errorf("printWidth must be a multiple of 8")
	}

	if printHeight%8 != 0 {
		return nil, fmt.Errorf("printHeight must be a multiple of 8")
	}

	bytes := make([]byte, (printWidth*printHeight)>>3)

	for y := 0; y < printHeight; y++ {
		for x := 0; x < printWidth; x = x + 8 {
			i := y*(printWidth>>3) + (x >> 3)
			bytes[i] =
				byte((getPixelValue(x+0, y, pixels) << 7) |
					(getPixelValue(x+1, y, pixels) << 6) |
					(getPixelValue(x+2, y, pixels) << 5) |
					(getPixelValue(x+3, y, pixels) << 4) |
					(getPixelValue(x+4, y, pixels) << 3) |
					(getPixelValue(x+5, y, pixels) << 2) |
					(getPixelValue(x+6, y, pixels) << 1) |
					getPixelValue(x+7, y, pixels))
		}
	}

	return bytes, nil
}

func getPixelValue(x int, y int, pixels *[][]pixel) int {
	row := (*pixels)[y]
	pixel := row[x]

	if pixel.R > 0 {
		return 0
	}

	return 1
}

func rgbaToPixel(r uint32, g uint32, b uint32, a uint32) pixel {
	return pixel{int(r >> 8), int(g >> 8), int(b >> 8), int(a >> 8)}
}

type pixel struct {
	R int
	G int
	B int
	A int
}

func getPixels(img image.Image) (int, int, [][]pixel) {
	bounds := img.Bounds()
	width, height := bounds.Max.X, bounds.Max.Y

	var pixels [][]pixel
	for y := 0; y < height; y++ {
		var row []pixel
		for x := 0; x < width; x++ {
			row = append(row, rgbaToPixel(img.At(x, y).RGBA()))
		}
		pixels = append(pixels, row)
	}

	return width, height, pixels
}
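makeGrayscale above reduces each pixel to pure black or white by thresholding its Rec. 601 luminance (0.299 R + 0.587 G + 0.114 B) at 128. A minimal standalone sketch of that rule, with the same coefficients (function names here are illustrative, not part of the library):

```go
package main

import "fmt"

// luminance computes the Rec. 601 weighted brightness used by makeGrayscale.
func luminance(r, g, b int) float64 {
	return float64(r)*0.299 + float64(g)*0.587 + float64(b)*0.114
}

// threshold maps an RGB pixel to pure black (0) or pure white (255),
// mirroring the `luminance < 128` branch in makeGrayscale.
func threshold(r, g, b int) int {
	if luminance(r, g, b) < 128 {
		return 0
	}
	return 255
}

func main() {
	fmt.Println(threshold(0, 0, 0))       // 0: black stays black
	fmt.Println(threshold(255, 255, 255)) // 255: white stays white
	fmt.Println(threshold(200, 30, 30))   // 0: dark red prints as black
}
```

Because getPixelValue later emits 1 for dark pixels and 0 for light ones, this binarization is what turns the grayscale image into the raster bitmask.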
7
vendor/github.com/justinmichaelvieira/escpos/configs.go
generated
vendored
Normal file
@@ -0,0 +1,7 @@
package escpos

var (
	ConfigEpsonTMT20II = PrinterConfig{}
	ConfigEpsonTMT88II = PrinterConfig{DisableUpsideDown: true}
	ConfigSOL802       = PrinterConfig{DisableUpsideDown: true}
)
450
vendor/github.com/justinmichaelvieira/escpos/main.go
generated
vendored
Normal file
@@ -0,0 +1,450 @@
|
||||
package escpos
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"image"
|
||||
"io"
|
||||
"math"
|
||||
|
||||
"github.com/justinmichaelvieira/iconv"
|
||||
)
|
||||
|
||||
type Style struct {
|
||||
Bold bool
|
||||
Width, Height uint8
|
||||
Reverse bool
|
||||
Underline uint8 // can be 0, 1 or 2
|
||||
UpsideDown bool
|
||||
Rotate bool
|
||||
Justify uint8
|
||||
}
|
||||
|
||||
const (
|
||||
JustifyLeft uint8 = 0
|
||||
JustifyCenter uint8 = 1
|
||||
JustifyRight uint8 = 2
|
||||
QRCodeErrorCorrectionLevelL uint8 = 48
|
||||
QRCodeErrorCorrectionLevelM uint8 = 49
|
||||
QRCodeErrorCorrectionLevelQ uint8 = 50
|
||||
QRCodeErrorCorrectionLevelH uint8 = 51
|
||||
esc byte = 0x1B
|
||||
gs byte = 0x1D
|
||||
fs byte = 0x1C
|
||||
)
|
||||
|
||||
type PrinterConfig struct {
|
||||
DisableUnderline bool
|
||||
DisableBold bool
|
||||
DisableReverse bool
|
||||
DisableRotate bool
|
||||
DisableUpsideDown bool
|
||||
DisableJustify bool
|
||||
}
|
||||
|
||||
type Escpos struct {
|
||||
dst *bufio.Writer
|
||||
Style Style
|
||||
config PrinterConfig
|
||||
}
|
||||
|
||||
// New create an Escpos printer
|
||||
func New(dst io.Writer) (e *Escpos) {
|
||||
e = &Escpos{
|
||||
dst: bufio.NewWriter(dst),
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Sets the Printerconfig
|
||||
func (e *Escpos) SetConfig(conf PrinterConfig) {
|
||||
e.config = conf
|
||||
}
|
||||
|
||||
// Sends the buffered data to the printer
|
||||
func (e *Escpos) Print() error {
|
||||
return e.dst.Flush()
|
||||
}
|
||||
|
||||
// Sends the buffered data to the printer and performs a cut
|
||||
func (e *Escpos) PrintAndCut() error {
|
||||
_, err := e.Cut()
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write to buffer: %v", err)
|
||||
}
|
||||
return e.dst.Flush()
|
||||
}
|
||||
|
||||
// WriteRaw write raw bytes to the printer
|
||||
func (e *Escpos) WriteRaw(data []byte) (int, error) {
|
||||
if len(data) > 0 {
|
||||
return e.dst.Write(data)
|
||||
}
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
// Stuff for writing text.
|
||||
|
||||
// Writes a string using the predefined options.
|
||||
func (e *Escpos) Write(data string) (int, error) {
|
||||
// we gonna write sum text, so apply the styles!
|
||||
var err error
|
||||
// Bold
|
||||
if !e.config.DisableBold {
|
||||
_, err = e.WriteRaw([]byte{esc, 'E', boolToByte(e.Style.Bold)})
|
||||
if err != nil {
|
||||
// return 0 written bytes here, because technically we did not write any of the bytes of data
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
// Underline
|
||||
if !e.config.DisableUnderline {
|
||||
_, err = e.WriteRaw([]byte{esc, '-', e.Style.Underline})
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
// Reverse
|
||||
if !e.config.DisableReverse {
|
||||
_, err = e.WriteRaw([]byte{gs, 'B', boolToByte(e.Style.Reverse)})
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
|
||||
// Rotate
|
||||
if !e.config.DisableRotate {
|
||||
_, err = e.WriteRaw([]byte{esc, 'V', boolToByte(e.Style.Rotate)})
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
|
||||
// UpsideDown
|
||||
if !e.config.DisableUpsideDown {
|
||||
_, err = e.WriteRaw([]byte{esc, '{', boolToByte(e.Style.UpsideDown)})
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
// Justify
|
||||
if !e.config.DisableJustify {
|
||||
_, err = e.WriteRaw([]byte{esc, 'a', e.Style.Justify})
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
|
||||
// Width / Height
|
||||
_, err = e.WriteRaw([]byte{gs, '!', ((e.Style.Width - 1) << 4) | (e.Style.Height - 1)})
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
return e.WriteRaw([]byte(data))
|
||||
}
|
||||
|
||||
// WriteGBK writes a string to the printer using GBK encoding
|
||||
func (e *Escpos) WriteGBK(data string) (int, error) {
|
||||
gbk, err := iconv.ConvertString(data, iconv.GBK, iconv.UTF8)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return e.Write(gbk)
|
||||
}
|
||||
|
||||
// WriteWEU writes a string to the printer using Western European encoding
|
||||
func (e *Escpos) WriteWEU(data string) (int, error) {
|
||||
weu, err := iconv.ConvertString(data, iconv.CP850, iconv.UTF8)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return e.Write(weu)
|
||||
}
|
||||
|
||||
// Sets the printer to print Bold text.
|
||||
func (e *Escpos) Bold(p bool) *Escpos {
|
||||
e.Style.Bold = p
|
||||
return e
|
||||
}
|
||||
|
||||
// Sets the Underline. p can be 0, 1 or 2. It defines the thickness of the underline in dots
|
||||
func (e *Escpos) Underline(p uint8) *Escpos {
|
||||
e.Style.Underline = p
|
||||
return e
|
||||
}
|
||||
|
||||
// Sets Reverse printing. If true the printer will inverse to white text on black background.
|
||||
func (e *Escpos) Reverse(p bool) *Escpos {
|
||||
e.Style.Reverse = p
|
||||
return e
|
||||
}
|
||||
|
||||
// Sets the justification of the text. Possible values are 0, 1 or 2. You can use
|
||||
// JustifyLeft for left alignment
|
||||
// JustifyCenter for center alignment
|
||||
// JustifyRight for right alignment
|
||||
func (e *Escpos) Justify(p uint8) *Escpos {
|
||||
e.Style.Justify = p
|
||||
return e
|
||||
}
|
||||
|
||||
// Toggles 90° CW rotation
|
||||
func (e *Escpos) Rotate(p bool) *Escpos {
|
||||
e.Style.Rotate = p
|
||||
return e
|
||||
}
|
||||
|
||||
// Toggles UpsideDown printing
|
||||
func (e *Escpos) UpsideDown(p bool) *Escpos {
|
||||
e.Style.UpsideDown = p
|
||||
return e
|
||||
}
|
||||
|
||||
// Sets the size of the font. Width and Height should be between 0 and 5. If the value is bigger than 5, 5 is used.
|
||||
func (e *Escpos) Size(width uint8, height uint8) *Escpos {
|
||||
// Values > 5 are not supported by esc/pos, so we'll set 5 as the maximum.
|
||||
if width > 5 {
|
||||
width = 5
|
||||
}
|
||||
if height > 5 {
|
||||
height = 5
|
||||
}
|
||||
e.Style.Width = width
|
||||
e.Style.Height = height
|
||||
	return e
}

// Barcode stuff.

// Sets the position of the HRI characters.
// 0: not printed
// 1: above the bar code
// 2: below the bar code
// 3: both
func (e *Escpos) HRIPosition(p uint8) (int, error) {
	if p > 3 {
		p = 0
	}
	return e.WriteRaw([]byte{gs, 'H', p})
}

// Sets the HRI font to either
// false: Font A (12x24) or
// true: Font B (9x24).
func (e *Escpos) HRIFont(p bool) (int, error) {
	return e.WriteRaw([]byte{gs, 'f', boolToByte(p)})
}

// Sets the height for a bar code. Default is 162.
func (e *Escpos) BarcodeHeight(p uint8) (int, error) {
	return e.WriteRaw([]byte{gs, 'h', p})
}

// Sets the horizontal size for a bar code. Default is 3; must be between 2 and 6.
func (e *Escpos) BarcodeWidth(p uint8) (int, error) {
	if p < 2 {
		p = 2
	}
	if p > 6 {
		p = 6
	}
	return e.WriteRaw([]byte{gs, 'w', p})
}

// Prints a UPC-A barcode. code may only contain numerical characters and must have a length of 11 or 12.
func (e *Escpos) UPCA(code string) (int, error) {
	if len(code) != 11 && len(code) != 12 {
		return 0, fmt.Errorf("code must have a length of 11 or 12")
	}
	if !onlyDigits(code) {
		return 0, fmt.Errorf("code can only contain numerical characters")
	}
	byteCode := append([]byte(code), 0)
	return e.WriteRaw(append([]byte{gs, 'k', 0}, byteCode...))
}

// Prints a UPC-E barcode. code may only contain numerical characters and must have a length of 11 or 12.
func (e *Escpos) UPCE(code string) (int, error) {
	if len(code) != 11 && len(code) != 12 {
		return 0, fmt.Errorf("code must have a length of 11 or 12")
	}
	if !onlyDigits(code) {
		return 0, fmt.Errorf("code can only contain numerical characters")
	}
	byteCode := append([]byte(code), 0)
	return e.WriteRaw(append([]byte{gs, 'k', 1}, byteCode...))
}

// Prints an EAN13 barcode. code may only contain numerical characters and must have a length of 12 or 13.
func (e *Escpos) EAN13(code string) (int, error) {
	if len(code) != 12 && len(code) != 13 {
		return 0, fmt.Errorf("code must have a length of 12 or 13")
	}
	if !onlyDigits(code) {
		return 0, fmt.Errorf("code can only contain numerical characters")
	}
	byteCode := append([]byte(code), 0)
	return e.WriteRaw(append([]byte{gs, 'k', 2}, byteCode...))
}

// Prints an EAN8 barcode. code may only contain numerical characters and must have a length of 7 or 8.
func (e *Escpos) EAN8(code string) (int, error) {
	if len(code) != 7 && len(code) != 8 {
		return 0, fmt.Errorf("code must have a length of 7 or 8")
	}
	if !onlyDigits(code) {
		return 0, fmt.Errorf("code can only contain numerical characters")
	}
	byteCode := append([]byte(code), 0)
	return e.WriteRaw(append([]byte{gs, 'k', 3}, byteCode...))
}
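The four functions above all build the same legacy `GS k` frame: a one-byte function code (0 = UPC-A, 1 = UPC-E, 2 = EAN13, 3 = EAN8) followed by the NUL-terminated digit payload. A minimal standalone sketch of that framing (the helper name `barcodeFrame` is hypothetical, not part of the library):

```go
package main

import "fmt"

const gs = 0x1D // GS control byte, same value the library uses

// barcodeFrame builds the legacy GS k command: GS 'k' fn <digits> NUL.
// fn is the one-byte function code (0=UPC-A, 1=UPC-E, 2=EAN13, 3=EAN8).
// Hypothetical helper for illustration only.
func barcodeFrame(fn byte, code string) []byte {
	payload := append([]byte(code), 0) // NUL-terminated digit payload
	return append([]byte{gs, 'k', fn}, payload...)
}

func main() {
	frame := barcodeFrame(3, "1234567") // EAN8 with a 7-digit code
	fmt.Printf("% X\n", frame)          // 1D 6B 03 31 32 33 34 35 36 37 00
}
```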

// TODO:
// CODE39, ITF, CODABAR

// Prints a QR code.
// code specifies the data to be printed.
// model specifies the QR code model: false for model 1, true for model 2.
// size specifies the module size in dots; it must be between 1 and 16.
// correctionLevel specifies the error correction level (48-51).
func (e *Escpos) QRCode(code string, model bool, size uint8, correctionLevel uint8) (int, error) {
	if len(code) > 7089 {
		return 0, fmt.Errorf("the code is too long; its length must be smaller than 7090")
	}
	if size < 1 {
		size = 1
	}
	if size > 16 {
		size = 16
	}

	// Set the QR code model.
	var m byte = 49
	var err error
	if model {
		m = 50
	}
	_, err = e.WriteRaw([]byte{gs, '(', 'k', 4, 0, 49, 65, m, 0})
	if err != nil {
		return 0, err
	}

	// Set the QR code size.
	_, err = e.WriteRaw([]byte{gs, '(', 'k', 3, 0, 49, 67, size})
	if err != nil {
		return 0, err
	}

	// Set the QR code error correction level.
	if correctionLevel < 48 {
		correctionLevel = 48
	}
	if correctionLevel > 51 {
		correctionLevel = 51
	}
	_, err = e.WriteRaw([]byte{gs, '(', 'k', 3, 0, 49, 69, correctionLevel})
	if err != nil {
		return 0, err
	}

	// Store the data in the printer's symbol buffer.
	// From here on we write the actual payload, so save the byte count for returning.

	// pL and pH encode the size of the data: the payload ranges from 1 to (pL + pH*256)-3,
	// so 3 < pL + pH*256 < 7093.
	var codeLength = len(code) + 3
	var pL, pH byte
	pH = byte(int(math.Floor(float64(codeLength) / 256)))
	pL = byte(codeLength - 256*int(pH))

	written, err := e.WriteRaw(append([]byte{gs, '(', 'k', pL, pH, 49, 80, 48}, []byte(code)...))
	if err != nil {
		return written, err
	}

	// Finally, print the buffered symbol.
	_, err = e.WriteRaw([]byte{gs, '(', 'k', 3, 0, 49, 81, 48})
	if err != nil {
		return written, err
	}

	return written, nil
}
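The pL/pH split used when storing the QR payload can be checked in isolation: the total command length is the payload length plus 3 (for the function-code bytes 49, 80, 48), split into a low byte and a high byte so that total = pL + pH*256. A minimal sketch (the helper name `qrLength` is hypothetical, not part of the library):

```go
package main

import "fmt"

// qrLength splits len(code)+3 into the low/high length bytes pL and pH
// used by the GS ( k store command, where total = pL + pH*256.
// Hypothetical helper mirroring the QRCode logic above.
func qrLength(code string) (pL, pH byte) {
	n := len(code) + 3
	pH = byte(n / 256)
	pL = byte(n % 256)
	return
}

func main() {
	pL, pH := qrLength("https://example.com")
	fmt.Println(pL, pH) // 19-byte payload + 3 = 22 → 22 0
}
```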

// TODO: PDF417
//func (e *Escpos) PDF417() (int, error) {
//
//}

// Image stuff.

// Prints an image.
func (e *Escpos) PrintImage(image image.Image) (int, error) {
	xL, xH, yL, yH, data := printImage(image)
	return e.WriteRaw(append([]byte{gs, 'v', 48, 0, xL, xH, yL, yH}, data...))
}

// Prints a predefined NV bit image with index p and mode mode.
func (e *Escpos) PrintNVBitImage(p uint8, mode uint8) (int, error) {
	if p == 0 {
		return 0, fmt.Errorf("indices of NV bit images start at 1")
	}
	if mode > 3 {
		return 0, fmt.Errorf("mode only supports values from 0 to 3")
	}

	return e.WriteRaw([]byte{fs, 'd', p, mode})
}

// Configuration stuff

// Sends a newline to the printer.
func (e *Escpos) LineFeed() (int, error) {
	return e.Write("\n")
}

// According to the command manual, this prints and feeds the paper p * (line spacing).
func (e *Escpos) LineFeedD(p uint8) (int, error) {
	return e.WriteRaw([]byte{esc, 'd', p})
}

// Sets the line spacing to the default. According to the command manual this is 1/6 inch.
func (e *Escpos) DefaultLineSpacing() (int, error) {
	return e.WriteRaw([]byte{esc, '2'})
}

// Sets the line spacing to multiples of the "horizontal and vertical motion units". Those can be set with MotionUnits.
func (e *Escpos) LineSpacing(p uint8) (int, error) {
	return e.WriteRaw([]byte{esc, '3', p})
}

// Initializes the printer to the settings it had when turned on.
func (e *Escpos) Initialize() (int, error) {
	return e.WriteRaw([]byte{esc, '@'})
}

// Sets the horizontal (x) and vertical (y) motion units to 1/x inch and 1/y inch. At least according to the manual; you may not want to use this, as it does not seem to behave that way on an Epson TM-20II.
func (e *Escpos) MotionUnits(x, y uint8) (int, error) {
	return e.WriteRaw([]byte{gs, 'P', x, y})
}

// Feeds the paper to the end and performs a cut. The ESC/POS Command Manual also documents PartialCut and FullCut, but they do exactly the same thing.
func (e *Escpos) Cut() (int, error) {
	return e.WriteRaw([]byte{gs, 'V', 'A', 0x00})
}

// Helpers

func boolToByte(b bool) byte {
	if b {
		return 0x01
	}
	return 0x00
}

func onlyDigits(s string) bool {
	for _, c := range s {
		if c < '0' || c > '9' {
			return false
		}
	}
	return true
}
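The two helpers are pure functions and easy to exercise on their own; a minimal standalone sketch (both bodies copied verbatim from the library):

```go
package main

import "fmt"

// boolToByte maps false/true to the 0x00/0x01 bytes ESC/POS expects.
func boolToByte(b bool) byte {
	if b {
		return 0x01
	}
	return 0x00
}

// onlyDigits reports whether s consists solely of ASCII digits.
func onlyDigits(s string) bool {
	for _, c := range s {
		if c < '0' || c > '9' {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(boolToByte(true))          // 1
	fmt.Println(onlyDigits("01234567890")) // true
	fmt.Println(onlyDigits("0123-456"))    // false
}
```

Note that `onlyDigits` is intentionally ASCII-only: Unicode digits such as '٣' fall outside '0'..'9' and are rejected, which is what the barcode payloads require.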