Кобелев Андрей Андреевич
2022-08-15 23:06:20 +03:00
commit 4ea1a3802b
532 changed files with 211522 additions and 0 deletions

vendor/github.com/asdine/storm/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) [2017] [Asdine El Hrychy]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

vendor/github.com/asdine/storm/codec/gob/gob.go generated vendored Normal file

@@ -0,0 +1,35 @@
// Package gob contains a codec to encode and decode entities in Gob format
package gob
import (
"bytes"
"encoding/gob"
)
const name = "gob"
// Codec serializing objects using the gob package.
// See https://golang.org/pkg/encoding/gob/
var Codec = new(gobCodec)
type gobCodec int
func (c gobCodec) Marshal(v interface{}) ([]byte, error) {
var b bytes.Buffer
enc := gob.NewEncoder(&b)
err := enc.Encode(v)
if err != nil {
return nil, err
}
return b.Bytes(), nil
}
func (c gobCodec) Unmarshal(b []byte, v interface{}) error {
r := bytes.NewReader(b)
dec := gob.NewDecoder(r)
return dec.Decode(v)
}
func (c gobCodec) Name() string {
return name
}

vendor/github.com/asdine/storm/v3/.gitignore generated vendored Normal file

@@ -0,0 +1,32 @@
# IDE
.idea/
.vscode/
*.iml
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof
# Golang vendor folder
/vendor/

vendor/github.com/asdine/storm/v3/.travis.yml generated vendored Normal file

@@ -0,0 +1,19 @@
language: go
before_install:
- go get github.com/stretchr/testify
env: GO111MODULE=on
go:
- "1.13.x"
- "1.14.x"
- tip
matrix:
allow_failures:
- go: tip
script:
- go mod vendor
- go test -mod vendor -race -v ./...

vendor/github.com/asdine/storm/v3/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) [2017] [Asdine El Hrychy]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

vendor/github.com/asdine/storm/v3/README.md generated vendored Normal file

@@ -0,0 +1,643 @@
# Storm
[![Build Status](https://travis-ci.org/asdine/storm.svg)](https://travis-ci.org/asdine/storm)
[![GoDoc](https://godoc.org/github.com/asdine/storm?status.svg)](https://godoc.org/github.com/asdine/storm)
Storm is a simple and powerful toolkit for [BoltDB](https://github.com/coreos/bbolt). Basically, Storm provides indexes, a wide range of methods to store and fetch data, an advanced query system, and much more.
In addition to the examples below, see also the [examples in the GoDoc](https://godoc.org/github.com/asdine/storm#pkg-examples).
_For extended queries and support for [Badger](https://github.com/dgraph-io/badger), see also [Genji](https://github.com/asdine/genji)_
## Table of Contents
- [Getting Started](#getting-started)
- [Import Storm](#import-storm)
- [Open a database](#open-a-database)
- [Simple CRUD system](#simple-crud-system)
- [Declare your structures](#declare-your-structures)
- [Save your object](#save-your-object)
- [Auto Increment](#auto-increment)
- [Simple queries](#simple-queries)
- [Fetch one object](#fetch-one-object)
- [Fetch multiple objects](#fetch-multiple-objects)
- [Fetch all objects](#fetch-all-objects)
- [Fetch all objects sorted by index](#fetch-all-objects-sorted-by-index)
- [Fetch a range of objects](#fetch-a-range-of-objects)
- [Fetch objects by prefix](#fetch-objects-by-prefix)
- [Skip, Limit and Reverse](#skip-limit-and-reverse)
- [Delete an object](#delete-an-object)
- [Update an object](#update-an-object)
- [Initialize buckets and indexes before saving an object](#initialize-buckets-and-indexes-before-saving-an-object)
- [Drop a bucket](#drop-a-bucket)
- [Re-index a bucket](#re-index-a-bucket)
- [Advanced queries](#advanced-queries)
- [Transactions](#transactions)
- [Options](#options)
- [BoltOptions](#boltoptions)
- [MarshalUnmarshaler](#marshalunmarshaler)
- [Provided Codecs](#provided-codecs)
- [Use existing Bolt connection](#use-existing-bolt-connection)
- [Batch mode](#batch-mode)
- [Nodes and nested buckets](#nodes-and-nested-buckets)
- [Node options](#node-options)
- [Simple Key/Value store](#simple-keyvalue-store)
- [BoltDB](#boltdb)
- [License](#license)
- [Credits](#credits)
## Getting Started
```bash
GO111MODULE=on go get -u github.com/asdine/storm/v3
```
## Import Storm
```go
import "github.com/asdine/storm/v3"
```
## Open a database
Quick way of opening a database
```go
db, err := storm.Open("my.db")
defer db.Close()
```
`Open` can receive multiple options to customize the way it behaves. See [Options](#options) below
## Simple CRUD system
### Declare your structures
```go
type User struct {
ID int // primary key
Group string `storm:"index"` // this field will be indexed
Email string `storm:"unique"` // this field will be indexed with a unique constraint
Name string // this field will not be indexed
Age int `storm:"index"`
}
```
The primary key can be of any type as long as it is not a zero value. Storm will search for the tag `id`; if it is not present, Storm will search for a field named `ID`.
```go
type User struct {
ThePrimaryKey string `storm:"id"` // primary key
Group string `storm:"index"` // this field will be indexed
Email string `storm:"unique"` // this field will be indexed with a unique constraint
Name string // this field will not be indexed
}
```
Storm handles tags in nested structures with the `inline` tag
```go
type Base struct {
Ident bson.ObjectId `storm:"id"`
}
type User struct {
Base `storm:"inline"`
Group string `storm:"index"`
Email string `storm:"unique"`
Name string
CreatedAt time.Time `storm:"index"`
}
```
### Save your object
```go
user := User{
ID: 10,
Group: "staff",
Email: "john@provider.com",
Name: "John",
Age: 21,
CreatedAt: time.Now(),
}
err := db.Save(&user)
// err == nil
user.ID++
err = db.Save(&user)
// err == storm.ErrAlreadyExists
```
That's it.
`Save` creates or updates all the required indexes and buckets, checks the unique constraints and saves the object to the store.
#### Auto Increment
Storm can auto-increment integer values so you don't have to manage them yourself when saving your objects. The generated value is also written back into your field automatically.
```go
type Product struct {
Pk int `storm:"id,increment"` // primary key with auto increment
Name string
IntegerField uint64 `storm:"increment"`
IndexedIntegerField uint32 `storm:"index,increment"`
UniqueIntegerField int16 `storm:"unique,increment=100"` // the starting value can be set
}
p := Product{Name: "Vacuum Cleaner"}
fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 0
// 0
// 0
// 0
_ = db.Save(&p)
fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 1
// 1
// 1
// 100
```
### Simple queries
Any object can be fetched, indexed or not. Storm uses indexes when available, otherwise it uses the [query system](#advanced-queries).
#### Fetch one object
```go
var user User
err := db.One("Email", "john@provider.com", &user)
// err == nil
err = db.One("Name", "John", &user)
// err == nil
err = db.One("Name", "Jack", &user)
// err == storm.ErrNotFound
```
#### Fetch multiple objects
```go
var users []User
err := db.Find("Group", "staff", &users)
```
#### Fetch all objects
```go
var users []User
err := db.All(&users)
```
#### Fetch all objects sorted by index
```go
var users []User
err := db.AllByIndex("CreatedAt", &users)
```
#### Fetch a range of objects
```go
var users []User
err := db.Range("Age", 10, 21, &users)
```
#### Fetch objects by prefix
```go
var users []User
err := db.Prefix("Name", "Jo", &users)
```
#### Skip, Limit and Reverse
```go
var users []User
err := db.Find("Group", "staff", &users, storm.Skip(10))
err = db.Find("Group", "staff", &users, storm.Limit(10))
err = db.Find("Group", "staff", &users, storm.Reverse())
err = db.Find("Group", "staff", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.All(&users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.AllByIndex("CreatedAt", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
err = db.Range("Age", 10, 21, &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
```
#### Delete an object
```go
err := db.DeleteStruct(&user)
```
#### Update an object
```go
// Update multiple fields
err := db.Update(&User{ID: 10, Name: "Jack", Age: 45})
// Update a single field
err = db.UpdateField(&User{ID: 10}, "Age", 0)
```
#### Initialize buckets and indexes before saving an object
```go
err := db.Init(&User{})
```
Useful when starting your application
#### Drop a bucket
Using the struct
```go
err := db.Drop(&User{})
```
Using the bucket name
```go
err := db.Drop("User")
```
#### Re-index a bucket
```go
err := db.ReIndex(&User{})
```
Useful when the structure has changed
### Advanced queries
For more complex queries, you can use the `Select` method.
`Select` takes any number of [`Matcher`](https://godoc.org/github.com/asdine/storm/q#Matcher) from the [`q`](https://godoc.org/github.com/asdine/storm/q) package.
Here are some common Matchers:
```go
// Equality
q.Eq("Name", "John")
// Strictly greater than
q.Gt("Age", 7)
// Less than or equal to
q.Lte("Age", 77)
// Regular expression: match names that start with the letter D
q.Re("Name", "^D")
// In the given slice of values
q.In("Group", []string{"Staff", "Admin"})
// Comparing fields
q.EqF("FieldName", "SecondFieldName")
q.LtF("FieldName", "SecondFieldName")
q.GtF("FieldName", "SecondFieldName")
q.LteF("FieldName", "SecondFieldName")
q.GteF("FieldName", "SecondFieldName")
```
Matchers can also be combined with `And`, `Or` and `Not`:
```go
// Match if all match
q.And(
q.Gt("Age", 7),
q.Re("Name", "^D"),
)
// Match if one matches
q.Or(
q.Re("Name", "^A"),
q.Not(
q.Re("Name", "^B"),
),
q.Re("Name", "^C"),
q.In("Group", []string{"Staff", "Admin"}),
q.And(
q.StrictEq("Password", []byte(password)),
q.Eq("Registered", true),
),
)
```
You can find the complete list in the [documentation](https://godoc.org/github.com/asdine/storm/q#Matcher).
`Select` takes any number of matchers and wraps them into a `q.And()` so it's not necessary to specify it. It returns a [`Query`](https://godoc.org/github.com/asdine/storm#Query) type.
```go
query := db.Select(q.Gte("Age", 7), q.Lte("Age", 77))
```
The `Query` type contains methods to filter and order the records.
```go
// Limit
query = query.Limit(10)
// Skip
query = query.Skip(20)
// Calls can also be chained
query = query.Limit(10).Skip(20).OrderBy("Age").Reverse()
```
But also to specify how to fetch them.
```go
var users []User
err = query.Find(&users)
var user User
err = query.First(&user)
```
Examples with `Select`:
```go
// Find all users with an ID between 10 and 100
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Find(&users)
// Nested matchers
err = db.Select(q.Or(
q.Gt("ID", 50),
q.Lt("Age", 21),
q.And(
q.Eq("Group", "admin"),
q.Gte("Age", 21),
),
)).Find(&users)
query := db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name")
// Find multiple records
err = query.Find(&users)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").Find(&users)
// Find first record
err = query.First(&user)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").First(&user)
// Delete all matching records
err = query.Delete(new(User))
// Fetching records one by one (useful when the bucket contains a lot of records)
query = db.Select(q.Gte("ID", 10),q.Lte("ID", 100)).OrderBy("Age", "Name")
err = query.Each(new(User), func(record interface{}) error {
u := record.(*User)
...
return nil
})
```
See the [documentation](https://godoc.org/github.com/asdine/storm#Query) for a complete list of methods.
### Transactions
```go
tx, err := db.Begin(true)
if err != nil {
return err
}
defer tx.Rollback()
accountA.Amount -= 100
accountB.Amount += 100
err = tx.Save(accountA)
if err != nil {
return err
}
err = tx.Save(accountB)
if err != nil {
return err
}
return tx.Commit()
```
### Options
Storm options are functions that can be passed when constructing your Storm instance. You can pass any number of options.
#### BoltOptions
By default, Storm opens a database with the mode `0600` and a timeout of one second.
You can change this behavior by using `BoltOptions`
```go
db, err := storm.Open("my.db", storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}))
```
#### MarshalUnmarshaler
To store the data in BoltDB, Storm marshals it in JSON by default. If you wish to change this behavior you can pass a codec that implements [`codec.MarshalUnmarshaler`](https://godoc.org/github.com/asdine/storm/codec#MarshalUnmarshaler) via the [`storm.Codec`](https://godoc.org/github.com/asdine/storm#Codec) option:
```go
db, _ := storm.Open("my.db", storm.Codec(myCodec))
```
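For illustration, here is a minimal sketch of what such a codec could look like. It relies only on the `codec.MarshalUnmarshaler` interface; the package and type names (`prettycodec`, `prettyJSONCodec`) are made up for the example.
```go
// Package prettycodec is a hypothetical example of a custom Storm codec.
package prettycodec

import (
	"encoding/json"

	"github.com/asdine/storm/v3/codec"
)

// Codec can be passed to storm.Codec when opening the database.
var Codec codec.MarshalUnmarshaler = prettyJSONCodec{}

type prettyJSONCodec struct{}

func (prettyJSONCodec) Marshal(v interface{}) ([]byte, error) {
	// Indented JSON, just to show that any encoding strategy works.
	return json.MarshalIndent(v, "", "  ")
}

func (prettyJSONCodec) Unmarshal(b []byte, v interface{}) error {
	return json.Unmarshal(b, v)
}

func (prettyJSONCodec) Name() string {
	return "pretty-json"
}
```
It would then be wired in the same way: `db, _ := storm.Open("my.db", storm.Codec(prettycodec.Codec))`.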
##### Provided Codecs
You can easily implement your own `MarshalUnmarshaler`, but Storm comes with built-in support for [JSON](https://godoc.org/github.com/asdine/storm/codec/json) (default), [GOB](https://godoc.org/github.com/asdine/storm/codec/gob), [Sereal](https://godoc.org/github.com/asdine/storm/codec/sereal), [Protocol Buffers](https://godoc.org/github.com/asdine/storm/codec/protobuf) and [MessagePack](https://godoc.org/github.com/asdine/storm/codec/msgpack).
These can be used by importing the relevant package and using that codec to configure Storm. The example below shows all variants (without proper error handling):
```go
import (
"github.com/asdine/storm/v3"
"github.com/asdine/storm/v3/codec/gob"
"github.com/asdine/storm/v3/codec/json"
"github.com/asdine/storm/v3/codec/sereal"
"github.com/asdine/storm/v3/codec/protobuf"
"github.com/asdine/storm/v3/codec/msgpack"
)
var gobDb, _ = storm.Open("gob.db", storm.Codec(gob.Codec))
var jsonDb, _ = storm.Open("json.db", storm.Codec(json.Codec))
var serealDb, _ = storm.Open("sereal.db", storm.Codec(sereal.Codec))
var protobufDb, _ = storm.Open("protobuf.db", storm.Codec(protobuf.Codec))
var msgpackDb, _ = storm.Open("msgpack.db", storm.Codec(msgpack.Codec))
```
**Tip**: Adding Storm tags to generated Protobuf files can be tricky. A good solution is to use [this tool](https://github.com/favadi/protoc-go-inject-tag) to inject the tags during the compilation.
#### Use existing Bolt connection
You can use an existing connection and pass it to Storm
```go
bDB, _ := bolt.Open(filepath.Join(dir, "bolt.db"), 0600, &bolt.Options{Timeout: 10 * time.Second})
db, _ := storm.Open("my.db", storm.UseDB(bDB))
```
#### Batch mode
Batch mode can be enabled to speed up concurrent writes (see [Batch read-write transactions](https://github.com/coreos/bbolt#batch-read-write-transactions))
```go
db, _ := storm.Open("my.db", storm.Batch())
```
## Nodes and nested buckets
Storm takes advantage of BoltDB's nested buckets feature by using `storm.Node`.
A `storm.Node` is the underlying object used by `storm.DB` to manipulate a bucket.
To create a nested bucket and use the same API as `storm.DB`, you can use the `DB.From` method.
```go
repo := db.From("repo")
err := repo.Save(&Issue{
Title: "I want more features",
Author: user.ID,
})
err = repo.Save(newRelease("0.10"))
var issues []Issue
err = repo.Find("Author", user.ID, &issues)
var release Release
err = repo.One("Tag", "0.10", &release)
```
You can also chain the nodes to create a hierarchy
```go
chars := db.From("characters")
heroes := chars.From("heroes")
enemies := chars.From("enemies")
items := db.From("items")
potions := items.From("consumables").From("medicine").From("potions")
```
You can even pass the entire hierarchy as arguments to `From`:
```go
privateNotes := db.From("notes", "private")
workNotes := db.From("notes", "work")
```
### Node options
A Node can also be configured. Activating an option on a Node creates a copy, so a Node is always thread-safe.
```go
n := db.From("my-node")
```
Give a bolt.Tx transaction to the Node
```go
n = n.WithTransaction(tx)
```
Enable batch mode
```go
n = n.WithBatch(true)
```
Use a Codec
```go
n = n.WithCodec(gob.Codec)
```
## Simple Key/Value store
Storm can be used as a simple, robust key/value store that can store anything.
The key and the value can be of any type as long as the key is not a zero value.
Saving data:
```go
db.Set("logs", time.Now(), "I'm eating my breakfast man")
db.Set("sessions", bson.NewObjectId(), &someUser)
db.Set("weird storage", "754-3010", map[string]interface{}{
"hair": "blonde",
"likes": []string{"cheese", "star wars"},
})
```
Fetching data:
```go
user := User{}
db.Get("sessions", someObjectId, &user)
var details map[string]interface{}
db.Get("weird storage", "754-3010", &details)
db.Get("sessions", someObjectId, &details)
```
Deleting data:
```go
db.Delete("sessions", someObjectId)
db.Delete("weird storage", "754-3010")
```
You can find other useful methods in the [documentation](https://godoc.org/github.com/asdine/storm#KeyValueStore).
## BoltDB
BoltDB is still easily accessible and can be used as usual
```go
db.Bolt.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket([]byte("my bucket"))
val := bucket.Get([]byte("any id"))
fmt.Println(string(val))
return nil
})
```
A transaction can also be passed to Storm
```go
db.Bolt.Update(func(tx *bolt.Tx) error {
...
dbx := db.WithTransaction(tx)
err = dbx.Save(&user)
...
return nil
})
```
## License
MIT
## Credits
- [Asdine El Hrychy](https://github.com/asdine)
- [Bjørn Erik Pedersen](https://github.com/bep)

vendor/github.com/asdine/storm/v3/bucket.go generated vendored Normal file

@@ -0,0 +1,47 @@
package storm
import bolt "go.etcd.io/bbolt"
// CreateBucketIfNotExists creates the bucket below the current node if it doesn't
// already exist.
func (n *node) CreateBucketIfNotExists(tx *bolt.Tx, bucket string) (*bolt.Bucket, error) {
var b *bolt.Bucket
var err error
bucketNames := append(n.rootBucket, bucket)
for _, bucketName := range bucketNames {
if b != nil {
if b, err = b.CreateBucketIfNotExists([]byte(bucketName)); err != nil {
return nil, err
}
} else {
if b, err = tx.CreateBucketIfNotExists([]byte(bucketName)); err != nil {
return nil, err
}
}
}
return b, nil
}
// GetBucket returns the given bucket below the current node.
func (n *node) GetBucket(tx *bolt.Tx, children ...string) *bolt.Bucket {
var b *bolt.Bucket
bucketNames := append(n.rootBucket, children...)
for _, bucketName := range bucketNames {
if b != nil {
if b = b.Bucket([]byte(bucketName)); b == nil {
return nil
}
} else {
if b = tx.Bucket([]byte(bucketName)); b == nil {
return nil
}
}
}
return b
}
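The two methods above walk the node's `rootBucket` path one segment at a time, nesting each bucket below the previous one. As a rough, standalone sketch of that walk using plain bbolt (the `node` type itself is unexported, and the path and key below are made up for illustration):
```go
// Illustrative sketch only: resolving a nested bucket path with bbolt,
// mirroring the loop in CreateBucketIfNotExists above.
package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("example.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	path := []string{"characters", "heroes"} // e.g. db.From("characters", "heroes")
	err = db.Update(func(tx *bolt.Tx) error {
		var b *bolt.Bucket
		for _, name := range path {
			if b == nil {
				// The first segment hangs off the transaction root.
				b, err = tx.CreateBucketIfNotExists([]byte(name))
			} else {
				// Later segments are nested below the previous bucket.
				b, err = b.CreateBucketIfNotExists([]byte(name))
			}
			if err != nil {
				return err
			}
		}
		return b.Put([]byte("id"), []byte("1"))
	})
	if err != nil {
		log.Fatal(err)
	}
}
```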

vendor/github.com/asdine/storm/v3/codec/.gitignore generated vendored Normal file

@@ -0,0 +1 @@
*.db

vendor/github.com/asdine/storm/v3/codec/codec.go generated vendored Normal file

@@ -0,0 +1,11 @@
// Package codec contains sub-packages with different codecs that can be used
// to encode and decode entities in Storm.
package codec
// MarshalUnmarshaler represents a codec used to marshal and unmarshal entities.
type MarshalUnmarshaler interface {
Marshal(v interface{}) ([]byte, error)
Unmarshal(b []byte, v interface{}) error
// name of this codec
Name() string
}

vendor/github.com/asdine/storm/v3/codec/json/json.go generated vendored Normal file

@@ -0,0 +1,25 @@
// Package json contains a codec to encode and decode entities in JSON format
package json
import (
"encoding/json"
)
const name = "json"
// Codec that encodes to and decodes from JSON.
var Codec = new(jsonCodec)
type jsonCodec int
func (j jsonCodec) Marshal(v interface{}) ([]byte, error) {
return json.Marshal(v)
}
func (j jsonCodec) Unmarshal(b []byte, v interface{}) error {
return json.Unmarshal(b, v)
}
func (j jsonCodec) Name() string {
return name
}

vendor/github.com/asdine/storm/v3/errors.go generated vendored Normal file

@@ -0,0 +1,51 @@
package storm
import "errors"
// Errors
var (
// ErrNoID is returned when no ID field or id tag is found in the struct.
ErrNoID = errors.New("missing struct tag id or ID field")
// ErrZeroID is returned when the ID field is a zero value.
ErrZeroID = errors.New("id field must not be a zero value")
// ErrBadType is returned when a method receives an unexpected value type.
ErrBadType = errors.New("provided data must be a struct or a pointer to struct")
// ErrAlreadyExists is returned when trying to set an existing value on a field that has a unique index.
ErrAlreadyExists = errors.New("already exists")
// ErrNilParam is returned when the specified param is expected to be not nil.
ErrNilParam = errors.New("param must not be nil")
// ErrUnknownTag is returned when an unexpected tag is specified.
ErrUnknownTag = errors.New("unknown tag")
// ErrIdxNotFound is returned when the specified index is not found.
ErrIdxNotFound = errors.New("index not found")
// ErrSlicePtrNeeded is returned when an unexpected value is given, instead of a pointer to slice.
ErrSlicePtrNeeded = errors.New("provided target must be a pointer to slice")
// ErrStructPtrNeeded is returned when an unexpected value is given, instead of a pointer to struct.
ErrStructPtrNeeded = errors.New("provided target must be a pointer to struct")
// ErrPtrNeeded is returned when an unexpected value is given, instead of a pointer.
ErrPtrNeeded = errors.New("provided target must be a pointer to a valid variable")
// ErrNoName is returned when the specified struct has no name.
ErrNoName = errors.New("provided target must have a name")
// ErrNotFound is returned when the specified record is not saved in the bucket.
ErrNotFound = errors.New("not found")
// ErrNotInTransaction is returned when trying to rollback or commit when not in transaction.
ErrNotInTransaction = errors.New("not in transaction")
// ErrIncompatibleValue is returned when trying to set a value with a different type than the chosen field
ErrIncompatibleValue = errors.New("incompatible value")
// ErrDifferentCodec is returned when using a codec different than the first codec used with the bucket.
ErrDifferentCodec = errors.New("the selected codec is incompatible with this bucket")
)

vendor/github.com/asdine/storm/v3/extract.go generated vendored Normal file

@@ -0,0 +1,226 @@
package storm
import (
"fmt"
"reflect"
"strconv"
"strings"
"github.com/asdine/storm/v3/index"
bolt "go.etcd.io/bbolt"
)
// Storm tags
const (
tagID = "id"
tagIdx = "index"
tagUniqueIdx = "unique"
tagInline = "inline"
tagIncrement = "increment"
indexPrefix = "__storm_index_"
)
type fieldConfig struct {
Name string
Index string
IsZero bool
IsID bool
Increment bool
IncrementStart int64
IsInteger bool
Value *reflect.Value
ForceUpdate bool
}
// structConfig is a structure gathering all the relevant information about a model
type structConfig struct {
Name string
Fields map[string]*fieldConfig
ID *fieldConfig
}
func extract(s *reflect.Value, mi ...*structConfig) (*structConfig, error) {
if s.Kind() == reflect.Ptr {
e := s.Elem()
s = &e
}
if s.Kind() != reflect.Struct {
return nil, ErrBadType
}
typ := s.Type()
var child bool
var m *structConfig
if len(mi) > 0 {
m = mi[0]
child = true
} else {
m = &structConfig{}
m.Fields = make(map[string]*fieldConfig)
}
if m.Name == "" {
m.Name = typ.Name()
}
numFields := s.NumField()
for i := 0; i < numFields; i++ {
field := typ.Field(i)
value := s.Field(i)
if field.PkgPath != "" {
continue
}
err := extractField(&value, &field, m, child)
if err != nil {
return nil, err
}
}
if child {
return m, nil
}
if m.ID == nil {
return nil, ErrNoID
}
if m.Name == "" {
return nil, ErrNoName
}
return m, nil
}
func extractField(value *reflect.Value, field *reflect.StructField, m *structConfig, isChild bool) error {
var f *fieldConfig
var err error
tag := field.Tag.Get("storm")
if tag != "" {
f = &fieldConfig{
Name: field.Name,
IsZero: isZero(value),
IsInteger: isInteger(value),
Value: value,
IncrementStart: 1,
}
tags := strings.Split(tag, ",")
for _, tag := range tags {
switch tag {
case "id":
f.IsID = true
f.Index = tagUniqueIdx
case tagUniqueIdx, tagIdx:
f.Index = tag
case tagInline:
if value.Kind() == reflect.Ptr {
e := value.Elem()
value = &e
}
if value.Kind() == reflect.Struct {
a := value.Addr()
_, err := extract(&a, m)
if err != nil {
return err
}
}
// we don't need to save this field
return nil
default:
if strings.HasPrefix(tag, tagIncrement) {
f.Increment = true
parts := strings.Split(tag, "=")
if parts[0] != tagIncrement {
return ErrUnknownTag
}
if len(parts) > 1 {
f.IncrementStart, err = strconv.ParseInt(parts[1], 0, 64)
if err != nil {
return err
}
}
} else {
return ErrUnknownTag
}
}
}
if _, ok := m.Fields[f.Name]; !ok || !isChild {
m.Fields[f.Name] = f
}
}
if m.ID == nil && f != nil && f.IsID {
m.ID = f
}
// the field is named ID and no ID field has been detected before
if m.ID == nil && field.Name == "ID" {
if f == nil {
f = &fieldConfig{
Index: tagUniqueIdx,
Name: field.Name,
IsZero: isZero(value),
IsInteger: isInteger(value),
IsID: true,
Value: value,
IncrementStart: 1,
}
m.Fields[field.Name] = f
}
m.ID = f
}
return nil
}
func extractSingleField(ref *reflect.Value, fieldName string) (*structConfig, error) {
var cfg structConfig
cfg.Fields = make(map[string]*fieldConfig)
f, ok := ref.Type().FieldByName(fieldName)
if !ok || f.PkgPath != "" {
return nil, fmt.Errorf("field %s not found", fieldName)
}
v := ref.FieldByName(fieldName)
err := extractField(&v, &f, &cfg, false)
if err != nil {
return nil, err
}
return &cfg, nil
}
func getIndex(bucket *bolt.Bucket, idxKind string, fieldName string) (index.Index, error) {
var idx index.Index
var err error
switch idxKind {
case tagUniqueIdx:
idx, err = index.NewUniqueIndex(bucket, []byte(indexPrefix+fieldName))
case tagIdx:
idx, err = index.NewListIndex(bucket, []byte(indexPrefix+fieldName))
default:
err = ErrIdxNotFound
}
return idx, err
}
func isZero(v *reflect.Value) bool {
zero := reflect.Zero(v.Type()).Interface()
current := v.Interface()
return reflect.DeepEqual(current, zero)
}
func isInteger(v *reflect.Value) bool {
kind := v.Kind()
return v != nil && kind >= reflect.Int && kind <= reflect.Uint64
}

vendor/github.com/asdine/storm/v3/finder.go generated vendored Normal file

@@ -0,0 +1,499 @@
package storm
import (
"fmt"
"reflect"
"github.com/asdine/storm/v3/index"
"github.com/asdine/storm/v3/q"
bolt "go.etcd.io/bbolt"
)
// A Finder can fetch types from BoltDB.
type Finder interface {
// One returns one record by the specified index
One(fieldName string, value interface{}, to interface{}) error
// Find returns one or more records by the specified index
Find(fieldName string, value interface{}, to interface{}, options ...func(q *index.Options)) error
// AllByIndex gets all the records of a bucket that are indexed in the specified index
AllByIndex(fieldName string, to interface{}, options ...func(*index.Options)) error
// All gets all the records of a bucket.
// If there are no records it returns no error and the 'to' parameter is set to an empty slice.
All(to interface{}, options ...func(*index.Options)) error
// Select a list of records that match a list of matchers. Doesn't use indexes.
Select(matchers ...q.Matcher) Query
// Range returns one or more records by the specified index within the specified range
Range(fieldName string, min, max, to interface{}, options ...func(*index.Options)) error
// Prefix returns one or more records whose given field starts with the specified prefix.
Prefix(fieldName string, prefix string, to interface{}, options ...func(*index.Options)) error
// Count counts all the records of a bucket
Count(data interface{}) (int, error)
}
// One returns one record by the specified index
func (n *node) One(fieldName string, value interface{}, to interface{}) error {
sink, err := newFirstSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
if fieldName == "" {
return ErrNotFound
}
ref := reflect.Indirect(sink.ref)
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && field.Index == "") {
query := newQuery(n, q.StrictEq(fieldName, value))
query.Limit(1)
if n.tx != nil {
err = query.query(n.tx, sink)
} else {
err = n.s.Bolt.View(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
}
if err != nil {
return err
}
return sink.flush()
}
val, err := toBytes(value, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.one(tx, bucketName, fieldName, cfg, to, val, field.IsID)
})
}
func (n *node) one(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, to interface{}, val []byte, skipIndex bool) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
var id []byte
if !skipIndex {
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
id = idx.Get(val)
} else {
id = val
}
if id == nil {
return ErrNotFound
}
raw := bucket.Get(id)
if raw == nil {
return ErrNotFound
}
return n.codec.Unmarshal(raw, to)
}
// Find returns one or more records by the specified index
func (n *node) Find(fieldName string, value interface{}, to interface{}, options ...func(q *index.Options)) error {
sink, err := newListSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
ref := reflect.Indirect(reflect.New(sink.elemType))
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && (field.Index == "" || value == nil)) {
query := newQuery(n, q.Eq(fieldName, value))
query.Skip(opts.Skip).Limit(opts.Limit)
if opts.Reverse {
query.Reverse()
}
err = n.readTx(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
if err != nil {
return err
}
return sink.flush()
}
val, err := toBytes(value, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.find(tx, bucketName, fieldName, cfg, sink, val, opts)
})
}
func (n *node) find(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, sink *listSink, val []byte, opts *index.Options) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
return err
}
list, err := idx.All(val, opts)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
sink.results = reflect.MakeSlice(reflect.Indirect(sink.ref).Type(), len(list), len(list))
sorter := newSorter(n, sink)
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
if _, err := sorter.filter(nil, bucket, list[i], raw); err != nil {
return err
}
}
return sorter.flush()
}
// AllByIndex gets all the records of a bucket that are indexed in the specified index
func (n *node) AllByIndex(fieldName string, to interface{}, options ...func(*index.Options)) error {
if fieldName == "" {
return n.All(to, options...)
}
ref := reflect.ValueOf(to)
if ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Slice {
return ErrSlicePtrNeeded
}
typ := reflect.Indirect(ref).Type().Elem()
if typ.Kind() == reflect.Ptr {
typ = typ.Elem()
}
newElem := reflect.New(typ)
cfg, err := extract(&newElem)
if err != nil {
return err
}
if cfg.ID.Name == fieldName {
return n.All(to, options...)
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
return n.readTx(func(tx *bolt.Tx) error {
return n.allByIndex(tx, fieldName, cfg, &ref, opts)
})
}
func (n *node) allByIndex(tx *bolt.Tx, fieldName string, cfg *structConfig, ref *reflect.Value, opts *index.Options) error {
bucket := n.GetBucket(tx, cfg.Name)
if bucket == nil {
return ErrNotFound
}
fieldCfg, ok := cfg.Fields[fieldName]
if !ok {
return ErrNotFound
}
idx, err := getIndex(bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
list, err := idx.AllRecords(opts)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
results := reflect.MakeSlice(reflect.Indirect(*ref).Type(), len(list), len(list))
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
err = n.codec.Unmarshal(raw, results.Index(i).Addr().Interface())
if err != nil {
return err
}
}
reflect.Indirect(*ref).Set(results)
return nil
}
// All gets all the records of a bucket.
// If there are no records it returns no error and the 'to' parameter is set to an empty slice.
func (n *node) All(to interface{}, options ...func(*index.Options)) error {
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
query := newQuery(n, nil).Limit(opts.Limit).Skip(opts.Skip)
if opts.Reverse {
query.Reverse()
}
err := query.Find(to)
if err != nil && err != ErrNotFound {
return err
}
if err == ErrNotFound {
ref := reflect.ValueOf(to)
results := reflect.MakeSlice(reflect.Indirect(ref).Type(), 0, 0)
reflect.Indirect(ref).Set(results)
}
return nil
}
// Range returns one or more records by the specified index within the specified range
func (n *node) Range(fieldName string, min, max, to interface{}, options ...func(*index.Options)) error {
sink, err := newListSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
ref := reflect.Indirect(reflect.New(sink.elemType))
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && field.Index == "") {
query := newQuery(n, q.And(q.Gte(fieldName, min), q.Lte(fieldName, max)))
query.Skip(opts.Skip).Limit(opts.Limit)
if opts.Reverse {
query.Reverse()
}
err = n.readTx(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
if err != nil {
return err
}
return sink.flush()
}
mn, err := toBytes(min, n.codec)
if err != nil {
return err
}
mx, err := toBytes(max, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.rnge(tx, bucketName, fieldName, cfg, sink, mn, mx, opts)
})
}
func (n *node) rnge(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, sink *listSink, min, max []byte, opts *index.Options) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
reflect.Indirect(sink.ref).SetLen(0)
return nil
}
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
return err
}
list, err := idx.Range(min, max, opts)
if err != nil {
return err
}
sink.results = reflect.MakeSlice(reflect.Indirect(sink.ref).Type(), len(list), len(list))
sorter := newSorter(n, sink)
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
if _, err := sorter.filter(nil, bucket, list[i], raw); err != nil {
return err
}
}
return sorter.flush()
}
// Prefix returns one or more records whose given field starts with the specified prefix.
func (n *node) Prefix(fieldName string, prefix string, to interface{}, options ...func(*index.Options)) error {
sink, err := newListSink(n, to)
if err != nil {
return err
}
bucketName := sink.bucketName()
if bucketName == "" {
return ErrNoName
}
ref := reflect.Indirect(reflect.New(sink.elemType))
cfg, err := extractSingleField(&ref, fieldName)
if err != nil {
return err
}
opts := index.NewOptions()
for _, fn := range options {
fn(opts)
}
field, ok := cfg.Fields[fieldName]
if !ok || (!field.IsID && field.Index == "") {
query := newQuery(n, q.Re(fieldName, fmt.Sprintf("^%s", prefix)))
query.Skip(opts.Skip).Limit(opts.Limit)
if opts.Reverse {
query.Reverse()
}
err = n.readTx(func(tx *bolt.Tx) error {
return query.query(tx, sink)
})
if err != nil {
return err
}
return sink.flush()
}
prfx, err := toBytes(prefix, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
return n.prefix(tx, bucketName, fieldName, cfg, sink, prfx, opts)
})
}
func (n *node) prefix(tx *bolt.Tx, bucketName, fieldName string, cfg *structConfig, sink *listSink, prefix []byte, opts *index.Options) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
reflect.Indirect(sink.ref).SetLen(0)
return nil
}
idx, err := getIndex(bucket, cfg.Fields[fieldName].Index, fieldName)
if err != nil {
return err
}
list, err := idx.Prefix(prefix, opts)
if err != nil {
return err
}
sink.results = reflect.MakeSlice(reflect.Indirect(sink.ref).Type(), len(list), len(list))
sorter := newSorter(n, sink)
for i := range list {
raw := bucket.Get(list[i])
if raw == nil {
return ErrNotFound
}
if _, err := sorter.filter(nil, bucket, list[i], raw); err != nil {
return err
}
}
return sorter.flush()
}
// Count counts all the records of a bucket
func (n *node) Count(data interface{}) (int, error) {
return n.Select().Count(data)
}

vendor/github.com/asdine/storm/v3/index/errors.go generated vendored Normal file

@@ -0,0 +1,14 @@
package index
import "errors"
var (
// ErrNotFound is returned when the specified record is not saved in the bucket.
ErrNotFound = errors.New("not found")
// ErrAlreadyExists is returned when trying to set an existing value on a field that has a unique index.
ErrAlreadyExists = errors.New("already exists")
// ErrNilParam is returned when the specified param is expected to be not nil.
ErrNilParam = errors.New("param must not be nil")
)

vendor/github.com/asdine/storm/v3/index/indexes.go generated vendored Normal file

@@ -0,0 +1,14 @@
// Package index contains Index engines used to store values and their corresponding IDs
package index
// Index interface
type Index interface {
Add(value []byte, targetID []byte) error
Remove(value []byte) error
RemoveID(id []byte) error
Get(value []byte) []byte
All(value []byte, opts *Options) ([][]byte, error)
AllRecords(opts *Options) ([][]byte, error)
Range(min []byte, max []byte, opts *Options) ([][]byte, error)
Prefix(prefix []byte, opts *Options) ([][]byte, error)
}

vendor/github.com/asdine/storm/v3/index/list.go generated vendored Normal file

@@ -0,0 +1,283 @@
package index
import (
"bytes"
"github.com/asdine/storm/v3/internal"
bolt "go.etcd.io/bbolt"
)
// NewListIndex loads a ListIndex
func NewListIndex(parent *bolt.Bucket, indexName []byte) (*ListIndex, error) {
var err error
b := parent.Bucket(indexName)
if b == nil {
if !parent.Writable() {
return nil, ErrNotFound
}
b, err = parent.CreateBucket(indexName)
if err != nil {
return nil, err
}
}
ids, err := NewUniqueIndex(b, []byte("storm__ids"))
if err != nil {
return nil, err
}
return &ListIndex{
IndexBucket: b,
Parent: parent,
IDs: ids,
}, nil
}
// ListIndex is an index that references values and the corresponding IDs.
type ListIndex struct {
Parent *bolt.Bucket
IndexBucket *bolt.Bucket
IDs *UniqueIndex
}
// Add a value to the list index
func (idx *ListIndex) Add(newValue []byte, targetID []byte) error {
if newValue == nil || len(newValue) == 0 {
return ErrNilParam
}
if targetID == nil || len(targetID) == 0 {
return ErrNilParam
}
key := idx.IDs.Get(targetID)
if key != nil {
err := idx.IndexBucket.Delete(key)
if err != nil {
return err
}
err = idx.IDs.Remove(targetID)
if err != nil {
return err
}
key = key[:0]
}
key = append(key, newValue...)
key = append(key, '_')
key = append(key, '_')
key = append(key, targetID...)
err := idx.IDs.Add(targetID, key)
if err != nil {
return err
}
return idx.IndexBucket.Put(key, targetID)
}
// Remove a value from the list index
func (idx *ListIndex) Remove(value []byte) error {
var err error
var keys [][]byte
c := idx.IndexBucket.Cursor()
prefix := generatePrefix(value)
for k, _ := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, _ = c.Next() {
keys = append(keys, k)
}
for _, k := range keys {
err = idx.IndexBucket.Delete(k)
if err != nil {
return err
}
}
return idx.IDs.RemoveID(value)
}
// RemoveID removes an ID from the list index
func (idx *ListIndex) RemoveID(targetID []byte) error {
value := idx.IDs.Get(targetID)
if value == nil {
return nil
}
err := idx.IndexBucket.Delete(value)
if err != nil {
return err
}
return idx.IDs.Remove(targetID)
}
// Get the first ID corresponding to the given value
func (idx *ListIndex) Get(value []byte) []byte {
c := idx.IndexBucket.Cursor()
prefix := generatePrefix(value)
for k, id := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, id = c.Next() {
return id
}
return nil
}
// All the IDs corresponding to the given value
func (idx *ListIndex) All(value []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := idx.IndexBucket.Cursor()
cur := internal.Cursor{C: c, Reverse: opts != nil && opts.Reverse}
prefix := generatePrefix(value)
k, id := c.Seek(prefix)
if cur.Reverse {
var count int
kc := k
idc := id
for ; kc != nil && bytes.HasPrefix(kc, prefix); kc, idc = c.Next() {
count++
k, id = kc, idc
}
if kc != nil {
k, id = c.Prev()
}
list = make([][]byte, 0, count)
}
for ; bytes.HasPrefix(k, prefix); k, id = cur.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
// AllRecords returns all the IDs of this index
func (idx *ListIndex) AllRecords(opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.Cursor{C: idx.IndexBucket.Cursor(), Reverse: opts != nil && opts.Reverse}
for k, id := c.First(); k != nil; k, id = c.Next() {
if id == nil || bytes.Equal(k, []byte("storm__ids")) {
continue
}
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
// Range returns the ids corresponding to the given range of values
func (idx *ListIndex) Range(min []byte, max []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.RangeCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Min: min,
Max: max,
CompareFn: func(val, limit []byte) int {
pos := bytes.LastIndex(val, []byte("__"))
return bytes.Compare(val[:pos], limit)
},
}
for k, id := c.First(); c.Continue(k); k, id = c.Next() {
if id == nil || bytes.Equal(k, []byte("storm__ids")) {
continue
}
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
// Prefix returns the ids whose values have the given prefix.
func (idx *ListIndex) Prefix(prefix []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.PrefixCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Prefix: prefix,
}
for k, id := c.First(); k != nil && c.Continue(k); k, id = c.Next() {
if id == nil || bytes.Equal(k, []byte("storm__ids")) {
continue
}
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, id)
}
return list, nil
}
func generatePrefix(value []byte) []byte {
prefix := make([]byte, len(value)+2)
var i int
for i = range value {
prefix[i] = value[i]
}
prefix[i+1] = '_'
prefix[i+2] = '_'
return prefix
}

vendor/github.com/asdine/storm/v3/index/options.go generated vendored Normal file

@@ -0,0 +1,15 @@
package index
// NewOptions creates initialized Options
func NewOptions() *Options {
return &Options{
Limit: -1,
}
}
// Options are used to customize queries
type Options struct {
Limit int
Skip int
Reverse bool
}

vendor/github.com/asdine/storm/v3/index/unique.go generated vendored Normal file

@@ -0,0 +1,183 @@
package index
import (
"bytes"
"github.com/asdine/storm/v3/internal"
bolt "go.etcd.io/bbolt"
)
// NewUniqueIndex loads a UniqueIndex
func NewUniqueIndex(parent *bolt.Bucket, indexName []byte) (*UniqueIndex, error) {
var err error
b := parent.Bucket(indexName)
if b == nil {
if !parent.Writable() {
return nil, ErrNotFound
}
b, err = parent.CreateBucket(indexName)
if err != nil {
return nil, err
}
}
return &UniqueIndex{
IndexBucket: b,
Parent: parent,
}, nil
}
// UniqueIndex is an index that references unique values and the corresponding ID.
type UniqueIndex struct {
Parent *bolt.Bucket
IndexBucket *bolt.Bucket
}
// Add a value to the unique index
func (idx *UniqueIndex) Add(value []byte, targetID []byte) error {
if value == nil || len(value) == 0 {
return ErrNilParam
}
if targetID == nil || len(targetID) == 0 {
return ErrNilParam
}
exists := idx.IndexBucket.Get(value)
if exists != nil {
if bytes.Equal(exists, targetID) {
return nil
}
return ErrAlreadyExists
}
return idx.IndexBucket.Put(value, targetID)
}
// Remove a value from the unique index
func (idx *UniqueIndex) Remove(value []byte) error {
return idx.IndexBucket.Delete(value)
}
// RemoveID removes an ID from the unique index
func (idx *UniqueIndex) RemoveID(id []byte) error {
c := idx.IndexBucket.Cursor()
for val, ident := c.First(); val != nil; val, ident = c.Next() {
if bytes.Equal(ident, id) {
return idx.Remove(val)
}
}
return nil
}
// Get the id corresponding to the given value
func (idx *UniqueIndex) Get(value []byte) []byte {
return idx.IndexBucket.Get(value)
}
// All returns all the ids corresponding to the given value
func (idx *UniqueIndex) All(value []byte, opts *Options) ([][]byte, error) {
id := idx.IndexBucket.Get(value)
if id != nil {
return [][]byte{id}, nil
}
return nil, nil
}
// AllRecords returns all the IDs of this index
func (idx *UniqueIndex) AllRecords(opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.Cursor{C: idx.IndexBucket.Cursor(), Reverse: opts != nil && opts.Reverse}
for val, ident := c.First(); val != nil; val, ident = c.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, ident)
}
return list, nil
}
// Range returns the ids corresponding to the given range of values
func (idx *UniqueIndex) Range(min []byte, max []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.RangeCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Min: min,
Max: max,
CompareFn: func(val, limit []byte) int {
return bytes.Compare(val, limit)
},
}
for val, ident := c.First(); val != nil && c.Continue(val); val, ident = c.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, ident)
}
return list, nil
}
// Prefix returns the ids whose values have the given prefix.
func (idx *UniqueIndex) Prefix(prefix []byte, opts *Options) ([][]byte, error) {
var list [][]byte
c := internal.PrefixCursor{
C: idx.IndexBucket.Cursor(),
Reverse: opts != nil && opts.Reverse,
Prefix: prefix,
}
for val, ident := c.First(); val != nil && c.Continue(val); val, ident = c.Next() {
if opts != nil && opts.Skip > 0 {
opts.Skip--
continue
}
if opts != nil && opts.Limit == 0 {
break
}
if opts != nil && opts.Limit > 0 {
opts.Limit--
}
list = append(list, ident)
}
return list, nil
}
// first returns the first ID of this index
func (idx *UniqueIndex) first() []byte {
c := idx.IndexBucket.Cursor()
for val, ident := c.First(); val != nil; val, ident = c.Next() {
return ident
}
return nil
}
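As a standalone sketch of how these index primitives behave, the snippet below drives a `UniqueIndex` by hand. Application code does not normally do this (Storm wires indexes up automatically from struct tags), and the bucket, value and ID below are made up for illustration; the index bucket name follows the `__storm_index_` prefix used in extract.go.
```go
// Illustrative sketch only: driving a UniqueIndex by hand inside a
// bbolt read-write transaction.
package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/v3/index"
	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("example.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		bucket, err := tx.CreateBucketIfNotExists([]byte("User"))
		if err != nil {
			return err
		}
		idx, err := index.NewUniqueIndex(bucket, []byte("__storm_index_Email"))
		if err != nil {
			return err
		}
		// Map the indexed value to the record ID; adding the same value
		// again with a different ID returns index.ErrAlreadyExists.
		if err := idx.Add([]byte("john@provider.com"), []byte("10")); err != nil {
			return err
		}
		fmt.Printf("id for email: %s\n", idx.Get([]byte("john@provider.com")))
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```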

vendor/github.com/asdine/storm/v3/internal/boltdb.go generated vendored Normal file

@@ -0,0 +1,121 @@
package internal
import (
"bytes"
bolt "go.etcd.io/bbolt"
)
// Cursor that can be reversed
type Cursor struct {
C *bolt.Cursor
Reverse bool
}
// First element
func (c *Cursor) First() ([]byte, []byte) {
if c.Reverse {
return c.C.Last()
}
return c.C.First()
}
// Next element
func (c *Cursor) Next() ([]byte, []byte) {
if c.Reverse {
return c.C.Prev()
}
return c.C.Next()
}
// RangeCursor that can be reversed
type RangeCursor struct {
C *bolt.Cursor
Reverse bool
Min []byte
Max []byte
CompareFn func([]byte, []byte) int
}
// First element
func (c *RangeCursor) First() ([]byte, []byte) {
if c.Reverse {
k, v := c.C.Seek(c.Max)
// If Seek doesn't find a key it goes to the next.
// If so, we need to get the previous one to avoid
// including bigger values. #218
if !bytes.HasPrefix(k, c.Max) && k != nil {
k, v = c.C.Prev()
}
return k, v
}
return c.C.Seek(c.Min)
}
// Next element
func (c *RangeCursor) Next() ([]byte, []byte) {
if c.Reverse {
return c.C.Prev()
}
return c.C.Next()
}
// Continue tells if the loop needs to continue
func (c *RangeCursor) Continue(val []byte) bool {
if c.Reverse {
return val != nil && c.CompareFn(val, c.Min) >= 0
}
return val != nil && c.CompareFn(val, c.Max) <= 0
}
// PrefixCursor that can be reversed
type PrefixCursor struct {
C *bolt.Cursor
Reverse bool
Prefix []byte
}
// First element
func (c *PrefixCursor) First() ([]byte, []byte) {
var k, v []byte
for k, v = c.C.First(); k != nil && !bytes.HasPrefix(k, c.Prefix); k, v = c.C.Next() {
}
if k == nil {
return nil, nil
}
if c.Reverse {
kc, vc := k, v
for ; kc != nil && bytes.HasPrefix(kc, c.Prefix); kc, vc = c.C.Next() {
k, v = kc, vc
}
if kc != nil {
k, v = c.C.Prev()
}
}
return k, v
}
// Next element
func (c *PrefixCursor) Next() ([]byte, []byte) {
if c.Reverse {
return c.C.Prev()
}
return c.C.Next()
}
// Continue tells if the loop needs to continue
func (c *PrefixCursor) Continue(val []byte) bool {
return val != nil && bytes.HasPrefix(val, c.Prefix)
}
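A short sketch of the reversible-cursor idea these types implement. Since `internal` packages cannot be imported from outside the storm module, a tiny equivalent wrapper is re-declared locally here purely for illustration, with made-up bucket and key names:
```go
// Illustrative sketch only: iterating a bbolt bucket in either direction,
// mirroring internal.Cursor above.
package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

type reversibleCursor struct {
	c       *bolt.Cursor
	reverse bool
}

func (rc reversibleCursor) First() ([]byte, []byte) {
	if rc.reverse {
		return rc.c.Last()
	}
	return rc.c.First()
}

func (rc reversibleCursor) Next() ([]byte, []byte) {
	if rc.reverse {
		return rc.c.Prev()
	}
	return rc.c.Next()
}

func main() {
	db, err := bolt.Open("example.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("logs"))
		if err != nil {
			return err
		}
		for _, k := range []string{"a", "b", "c"} {
			if err := b.Put([]byte(k), []byte("v-"+k)); err != nil {
				return err
			}
		}
		// Walk the bucket last-to-first by flipping the cursor direction.
		c := reversibleCursor{c: b.Cursor(), reverse: true}
		for k, v := c.First(); k != nil; k, v = c.Next() {
			fmt.Printf("%s => %s\n", k, v)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```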

vendor/github.com/asdine/storm/v3/kv.go generated vendored Normal file

@@ -0,0 +1,170 @@
package storm
import (
"reflect"
bolt "go.etcd.io/bbolt"
)
// KeyValueStore can store and fetch values by key
type KeyValueStore interface {
// Get a value from a bucket
Get(bucketName string, key interface{}, to interface{}) error
// Set a key/value pair into a bucket
Set(bucketName string, key interface{}, value interface{}) error
// Delete deletes a key from a bucket
Delete(bucketName string, key interface{}) error
// GetBytes gets a raw value from a bucket.
GetBytes(bucketName string, key interface{}) ([]byte, error)
// SetBytes sets a raw value into a bucket.
SetBytes(bucketName string, key interface{}, value []byte) error
// KeyExists reports the presence of a key in a bucket.
KeyExists(bucketName string, key interface{}) (bool, error)
}
// GetBytes gets a raw value from a bucket.
func (n *node) GetBytes(bucketName string, key interface{}) ([]byte, error) {
id, err := toBytes(key, n.codec)
if err != nil {
return nil, err
}
var val []byte
return val, n.readTx(func(tx *bolt.Tx) error {
raw, err := n.getBytes(tx, bucketName, id)
if err != nil {
return err
}
val = make([]byte, len(raw))
copy(val, raw)
return nil
})
}
// GetBytes gets a raw value from a bucket.
func (n *node) getBytes(tx *bolt.Tx, bucketName string, id []byte) ([]byte, error) {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return nil, ErrNotFound
}
raw := bucket.Get(id)
if raw == nil {
return nil, ErrNotFound
}
return raw, nil
}
// SetBytes sets a raw value into a bucket.
func (n *node) SetBytes(bucketName string, key interface{}, value []byte) error {
if key == nil {
return ErrNilParam
}
id, err := toBytes(key, n.codec)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.setBytes(tx, bucketName, id, value)
})
}
func (n *node) setBytes(tx *bolt.Tx, bucketName string, id, data []byte) error {
bucket, err := n.CreateBucketIfNotExists(tx, bucketName)
if err != nil {
return err
}
// save node configuration in the bucket
_, err = newMeta(bucket, n)
if err != nil {
return err
}
return bucket.Put(id, data)
}
// Get a value from a bucket
func (n *node) Get(bucketName string, key interface{}, to interface{}) error {
ref := reflect.ValueOf(to)
if !ref.IsValid() || ref.Kind() != reflect.Ptr {
return ErrPtrNeeded
}
id, err := toBytes(key, n.codec)
if err != nil {
return err
}
return n.readTx(func(tx *bolt.Tx) error {
raw, err := n.getBytes(tx, bucketName, id)
if err != nil {
return err
}
return n.codec.Unmarshal(raw, to)
})
}
// Set a key/value pair into a bucket
func (n *node) Set(bucketName string, key interface{}, value interface{}) error {
var data []byte
var err error
if value != nil {
data, err = n.codec.Marshal(value)
if err != nil {
return err
}
}
return n.SetBytes(bucketName, key, data)
}
// Delete deletes a key from a bucket
func (n *node) Delete(bucketName string, key interface{}) error {
id, err := toBytes(key, n.codec)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.delete(tx, bucketName, id)
})
}
func (n *node) delete(tx *bolt.Tx, bucketName string, id []byte) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
return bucket.Delete(id)
}
// KeyExists reports the presence of a key in a bucket.
func (n *node) KeyExists(bucketName string, key interface{}) (bool, error) {
id, err := toBytes(key, n.codec)
if err != nil {
return false, err
}
var exists bool
return exists, n.readTx(func(tx *bolt.Tx) error {
bucket := n.GetBucket(tx, bucketName)
if bucket == nil {
return ErrNotFound
}
v := bucket.Get(id)
if v != nil {
exists = true
}
return nil
})
}
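Beyond the `Set`/`Get` helpers shown in the README, the interface above also exposes raw-byte access. A small sketch, assuming (as the README examples do) that the opened `storm.DB` satisfies this interface; the bucket name `blobs` and key `avatar:10` are invented for the example:
```go
// Illustrative sketch only: storing and checking raw bytes without going
// through the configured codec.
package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/v3"
)

func main() {
	db, err := storm.Open("kv.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Store a raw payload as-is.
	if err := db.SetBytes("blobs", "avatar:10", []byte{0xff, 0xd8, 0xff}); err != nil {
		log.Fatal(err)
	}

	exists, err := db.KeyExists("blobs", "avatar:10")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("exists:", exists)

	raw, err := db.GetBytes("blobs", "avatar:10")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("bytes:", len(raw))
}
```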

vendor/github.com/asdine/storm/v3/metadata.go generated vendored Normal file

@@ -0,0 +1,69 @@
package storm
import (
"reflect"
bolt "go.etcd.io/bbolt"
)
const (
metaCodec = "codec"
)
func newMeta(b *bolt.Bucket, n Node) (*meta, error) {
m := b.Bucket([]byte(metadataBucket))
if m != nil {
name := m.Get([]byte(metaCodec))
if string(name) != n.Codec().Name() {
return nil, ErrDifferentCodec
}
return &meta{
node: n,
bucket: m,
}, nil
}
m, err := b.CreateBucket([]byte(metadataBucket))
if err != nil {
return nil, err
}
m.Put([]byte(metaCodec), []byte(n.Codec().Name()))
return &meta{
node: n,
bucket: m,
}, nil
}
type meta struct {
node Node
bucket *bolt.Bucket
}
func (m *meta) increment(field *fieldConfig) error {
var err error
counter := field.IncrementStart
raw := m.bucket.Get([]byte(field.Name + "counter"))
if raw != nil {
counter, err = numberfromb(raw)
if err != nil {
return err
}
counter++
}
raw, err = numbertob(counter)
if err != nil {
return err
}
err = m.bucket.Put([]byte(field.Name+"counter"), raw)
if err != nil {
return err
}
field.Value.Set(reflect.ValueOf(counter).Convert(field.Value.Type()))
field.IsZero = false
return nil
}

vendor/github.com/asdine/storm/v3/node.go generated vendored Normal file

@@ -0,0 +1,126 @@
package storm
import (
"github.com/asdine/storm/v3/codec"
bolt "go.etcd.io/bbolt"
)
// A Node in Storm represents the API to a BoltDB bucket.
type Node interface {
Tx
TypeStore
KeyValueStore
BucketScanner
// From returns a new Storm node with a new bucket root below the current.
// All DB operations on the new node will be executed relative to this bucket.
From(addend ...string) Node
// Bucket returns the bucket name as a slice from the root.
// In the normal, simple case this will be empty.
Bucket() []string
// GetBucket returns the given bucket below the current node.
GetBucket(tx *bolt.Tx, children ...string) *bolt.Bucket
// CreateBucketIfNotExists creates the bucket below the current node if it doesn't
// already exist.
CreateBucketIfNotExists(tx *bolt.Tx, bucket string) (*bolt.Bucket, error)
// WithTransaction returns a new Storm node that will use the given transaction.
WithTransaction(tx *bolt.Tx) Node
// Begin starts a new transaction.
Begin(writable bool) (Node, error)
// Codec used by this instance of Storm
Codec() codec.MarshalUnmarshaler
// WithCodec returns a new Storm node that will use the given Codec.
WithCodec(codec codec.MarshalUnmarshaler) Node
// WithBatch returns a new Storm Node with the batch mode enabled.
WithBatch(enabled bool) Node
}
// node is the concrete implementation of the Node interface, scoped to a BoltDB bucket.
type node struct {
s *DB
// The root bucket. In the normal, simple case this will be empty.
rootBucket []string
// Transaction object. Nil if not in transaction
tx *bolt.Tx
// Codec of this node
codec codec.MarshalUnmarshaler
// Enable batch mode for read-write transaction, instead of update mode
batchMode bool
}
// From returns a new Storm Node with a new bucket root below the current.
// All DB operations on the new node will be executed relative to this bucket.
func (n node) From(addend ...string) Node {
n.rootBucket = append(n.rootBucket, addend...)
return &n
}
// WithTransaction returns a new Storm Node that will use the given transaction.
func (n node) WithTransaction(tx *bolt.Tx) Node {
n.tx = tx
return &n
}
// WithCodec returns a new Storm Node that will use the given Codec.
func (n node) WithCodec(codec codec.MarshalUnmarshaler) Node {
n.codec = codec
return &n
}
// WithBatch returns a new Storm Node with the batch mode enabled.
func (n node) WithBatch(enabled bool) Node {
n.batchMode = enabled
return &n
}
// Bucket returns the bucket name as a slice from the root.
// In the normal, simple case this will be empty.
func (n *node) Bucket() []string {
return n.rootBucket
}
// Codec returns the MarshalUnmarshaler used by this instance of Storm
func (n *node) Codec() codec.MarshalUnmarshaler {
return n.codec
}
// readWriteTx runs fn inside the current transaction if one is open; otherwise it
// opens a new read-write transaction, using batch mode if enabled.
func (n *node) readWriteTx(fn func(tx *bolt.Tx) error) error {
if n.tx != nil {
return fn(n.tx)
}
if n.batchMode {
return n.s.Bolt.Batch(func(tx *bolt.Tx) error {
return fn(tx)
})
}
return n.s.Bolt.Update(func(tx *bolt.Tx) error {
return fn(tx)
})
}
// readTx runs fn inside the current transaction if one is open; otherwise it opens a new read-only transaction.
func (n *node) readTx(fn func(tx *bolt.Tx) error) error {
if n.tx != nil {
return fn(n.tx)
}
return n.s.Bolt.View(func(tx *bolt.Tx) error {
return fn(tx)
})
}
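
A short sketch of node composition with From and WithBatch, assuming hypothetical bucket names; each call returns a derived Node and leaves the original untouched (the methods have value receivers).

package main

import (
	"log"

	"github.com/asdine/storm/v3"
)

func main() {
	db, err := storm.Open("nodes.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// All operations on this node are scoped under the "tenants"/"acme" bucket chain.
	acme := db.From("tenants", "acme")

	// Derive a node that uses bolt's Batch instead of Update for read-write work.
	batched := acme.WithBatch(true)

	if err := batched.Set("counters", "visits", 1); err != nil {
		log.Fatal(err)
	}
	log.Println("bucket path:", acme.Bucket()) // [tenants acme]
}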

97
vendor/github.com/asdine/storm/v3/options.go generated vendored Normal file

@@ -0,0 +1,97 @@
package storm
import (
"os"
"github.com/asdine/storm/v3/codec"
"github.com/asdine/storm/v3/index"
bolt "go.etcd.io/bbolt"
)
// BoltOptions used to pass options to BoltDB.
func BoltOptions(mode os.FileMode, options *bolt.Options) func(*Options) error {
return func(opts *Options) error {
opts.boltMode = mode
opts.boltOptions = options
return nil
}
}
// Codec used to set a custom encoder and decoder. The default is JSON.
func Codec(c codec.MarshalUnmarshaler) func(*Options) error {
return func(opts *Options) error {
opts.codec = c
return nil
}
}
// Batch enables the use of batch instead of update for read-write transactions.
func Batch() func(*Options) error {
return func(opts *Options) error {
opts.batchMode = true
return nil
}
}
// Root used to set the root bucket. See also the From method.
func Root(root ...string) func(*Options) error {
return func(opts *Options) error {
opts.rootBucket = root
return nil
}
}
// UseDB allows Storm to use an existing open Bolt.DB.
// Warning: storm.DB.Close() will close the bolt.DB instance.
func UseDB(b *bolt.DB) func(*Options) error {
return func(opts *Options) error {
opts.path = b.Path()
opts.bolt = b
return nil
}
}
// Limit sets the maximum number of records to return
func Limit(limit int) func(*index.Options) {
return func(opts *index.Options) {
opts.Limit = limit
}
}
// Skip sets the number of records to skip
func Skip(offset int) func(*index.Options) {
return func(opts *index.Options) {
opts.Skip = offset
}
}
// Reverse will return the results in descending order
func Reverse() func(*index.Options) {
return func(opts *index.Options) {
opts.Reverse = true
}
}
// Options are used to customize the way Storm opens a database.
type Options struct {
// Handles encoding and decoding of objects
codec codec.MarshalUnmarshaler
// Bolt file mode
boltMode os.FileMode
// Bolt options
boltOptions *bolt.Options
// Enable batch mode for read-write transaction, instead of update mode
batchMode bool
// The root bucket name
rootBucket []string
// Path of the database file
path string
// Bolt is still easily accessible
bolt *bolt.DB
}
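
A sketch of wiring the functional options above into Open and into queries, assuming the vendored json codec package and an illustrative file path; the BoltOptions values are examples, not recommendations.

package main

import (
	"log"
	"time"

	"github.com/asdine/storm/v3"
	"github.com/asdine/storm/v3/codec/json"
	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := storm.Open("app.db",
		storm.Codec(json.Codec), // explicit codec (json is already the default)
		storm.Batch(),           // use bolt.Batch for read-write transactions
		storm.Root("app", "v1"), // nest everything under app/v1
		storm.BoltOptions(0600, &bolt.Options{Timeout: time.Second}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Limit, Skip and Reverse produce index.Options consumed by the Finder
	// methods defined elsewhere in this package, for example:
	//   db.Find("Group", "staff", &users, storm.Limit(10), storm.Skip(10), storm.Reverse())
}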

122
vendor/github.com/asdine/storm/v3/q/compare.go generated vendored Normal file

@@ -0,0 +1,122 @@
package q
import (
"go/constant"
"go/token"
"reflect"
"strconv"
)
func compare(a, b interface{}, tok token.Token) bool {
vala := reflect.ValueOf(a)
valb := reflect.ValueOf(b)
ak := vala.Kind()
bk := valb.Kind()
switch {
// comparing nil values
case (ak == reflect.Ptr || ak == reflect.Slice || ak == reflect.Interface || ak == reflect.Invalid) &&
	(bk == reflect.Ptr || bk == reflect.Slice || bk == reflect.Interface || bk == reflect.Invalid) &&
(!vala.IsValid() || vala.IsNil()) && (!valb.IsValid() || valb.IsNil()):
return true
case ak >= reflect.Int && ak <= reflect.Int64:
if bk >= reflect.Int && bk <= reflect.Int64 {
return constant.Compare(constant.MakeInt64(vala.Int()), tok, constant.MakeInt64(valb.Int()))
}
if bk >= reflect.Uint && bk <= reflect.Uint64 {
return constant.Compare(constant.MakeInt64(vala.Int()), tok, constant.MakeInt64(int64(valb.Uint())))
}
if bk == reflect.Float32 || bk == reflect.Float64 {
return constant.Compare(constant.MakeFloat64(float64(vala.Int())), tok, constant.MakeFloat64(valb.Float()))
}
if bk == reflect.String {
bla, err := strconv.ParseFloat(valb.String(), 64)
if err != nil {
return false
}
return constant.Compare(constant.MakeFloat64(float64(vala.Int())), tok, constant.MakeFloat64(bla))
}
case ak >= reflect.Uint && ak <= reflect.Uint64:
if bk >= reflect.Uint && bk <= reflect.Uint64 {
return constant.Compare(constant.MakeUint64(vala.Uint()), tok, constant.MakeUint64(valb.Uint()))
}
if bk >= reflect.Int && bk <= reflect.Int64 {
return constant.Compare(constant.MakeUint64(vala.Uint()), tok, constant.MakeUint64(uint64(valb.Int())))
}
if bk == reflect.Float32 || bk == reflect.Float64 {
return constant.Compare(constant.MakeFloat64(float64(vala.Uint())), tok, constant.MakeFloat64(valb.Float()))
}
if bk == reflect.String {
bla, err := strconv.ParseFloat(valb.String(), 64)
if err != nil {
return false
}
return constant.Compare(constant.MakeFloat64(float64(vala.Uint())), tok, constant.MakeFloat64(bla))
}
case ak == reflect.Float32 || ak == reflect.Float64:
if bk == reflect.Float32 || bk == reflect.Float64 {
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(valb.Float()))
}
if bk >= reflect.Int && bk <= reflect.Int64 {
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(float64(valb.Int())))
}
if bk >= reflect.Uint && bk <= reflect.Uint64 {
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(float64(valb.Uint())))
}
if bk == reflect.String {
bla, err := strconv.ParseFloat(valb.String(), 64)
if err != nil {
return false
}
return constant.Compare(constant.MakeFloat64(vala.Float()), tok, constant.MakeFloat64(bla))
}
case ak == reflect.String:
if bk == reflect.String {
return constant.Compare(constant.MakeString(vala.String()), tok, constant.MakeString(valb.String()))
}
}
typea, typeb := reflect.TypeOf(a), reflect.TypeOf(b)
if typea != nil && (typea.String() == "time.Time" || typea.String() == "*time.Time") &&
typeb != nil && (typeb.String() == "time.Time" || typeb.String() == "*time.Time") {
if typea.String() == "*time.Time" && vala.IsNil() {
return true
}
if typeb.String() == "*time.Time" {
if valb.IsNil() {
return true
}
valb = valb.Elem()
}
var x, y int64
x = 1
if vala.MethodByName("Equal").Call([]reflect.Value{valb})[0].Bool() {
y = 1
} else if vala.MethodByName("Before").Call([]reflect.Value{valb})[0].Bool() {
y = 2
}
return constant.Compare(constant.MakeInt64(x), tok, constant.MakeInt64(y))
}
if tok == token.EQL {
return reflect.DeepEqual(a, b)
}
return false
}
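
compare above leans on the standard go/constant and go/token packages to get consistent ordering across numeric and string kinds. A tiny standalone illustration of that primitive, mirroring the Make*/Compare pattern used in the function:

package main

import (
	"fmt"
	"go/constant"
	"go/token"
)

func main() {
	// Wrap both operands as constants and let constant.Compare apply the relational token.
	fmt.Println(constant.Compare(constant.MakeInt64(3), token.LSS, constant.MakeInt64(7)))         // true
	fmt.Println(constant.Compare(constant.MakeFloat64(2.5), token.GEQ, constant.MakeFloat64(2.5))) // true
	fmt.Println(constant.Compare(constant.MakeString("a"), token.EQL, constant.MakeString("b")))   // false
}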

67
vendor/github.com/asdine/storm/v3/q/fieldmatcher.go generated vendored Normal file

@@ -0,0 +1,67 @@
package q
import (
"errors"
"go/token"
"reflect"
)
// ErrUnknownField is returned when an unknown field is passed.
var ErrUnknownField = errors.New("unknown field")
type fieldMatcherDelegate struct {
FieldMatcher
Field string
}
// NewFieldMatcher creates a Matcher for a given field.
func NewFieldMatcher(field string, fm FieldMatcher) Matcher {
return fieldMatcherDelegate{Field: field, FieldMatcher: fm}
}
// FieldMatcher can be used in NewFieldMatcher as a simple way to create the
// most common Matcher: A Matcher that evaluates one field's value.
// For more complex scenarios, implement the Matcher interface directly.
type FieldMatcher interface {
MatchField(v interface{}) (bool, error)
}
func (r fieldMatcherDelegate) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return r.MatchValue(&v)
}
func (r fieldMatcherDelegate) MatchValue(v *reflect.Value) (bool, error) {
field := v.FieldByName(r.Field)
if !field.IsValid() {
return false, ErrUnknownField
}
return r.MatchField(field.Interface())
}
// NewField2FieldMatcher creates a Matcher for a given field1 and field2.
func NewField2FieldMatcher(field1, field2 string, tok token.Token) Matcher {
return field2fieldMatcherDelegate{Field1: field1, Field2: field2, Tok: tok}
}
type field2fieldMatcherDelegate struct {
Field1, Field2 string
Tok token.Token
}
func (r field2fieldMatcherDelegate) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return r.MatchValue(&v)
}
func (r field2fieldMatcherDelegate) MatchValue(v *reflect.Value) (bool, error) {
field1 := v.FieldByName(r.Field1)
if !field1.IsValid() {
return false, ErrUnknownField
}
field2 := v.FieldByName(r.Field2)
if !field2.IsValid() {
return false, ErrUnknownField
}
return compare(field1.Interface(), field2.Interface(), r.Tok), nil
}
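
A sketch of plugging a custom FieldMatcher into NewFieldMatcher; the hasSuffix matcher and the User struct are hypothetical. The matcher only ever sees the value of the named field.

package main

import (
	"fmt"
	"strings"

	"github.com/asdine/storm/v3/q"
)

// hasSuffix is a hypothetical FieldMatcher: it matches string fields ending in suffix.
type hasSuffix struct{ suffix string }

func (h hasSuffix) MatchField(v interface{}) (bool, error) {
	s, ok := v.(string)
	if !ok {
		return false, nil
	}
	return strings.HasSuffix(s, h.suffix), nil
}

type User struct {
	ID    int `storm:"id"`
	Email string
}

func main() {
	m := q.NewFieldMatcher("Email", hasSuffix{suffix: "@example.org"})
	ok, err := m.Match(&User{ID: 1, Email: "ada@example.org"})
	fmt.Println(ok, err) // true <nil>
}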

51
vendor/github.com/asdine/storm/v3/q/regexp.go generated vendored Normal file

@@ -0,0 +1,51 @@
package q
import (
"fmt"
"regexp"
"sync"
)
// Re creates a regexp matcher. It checks if the given field matches the given regexp.
// Note that this only supports fields of type string or []byte.
func Re(field string, re string) Matcher {
regexpCache.RLock()
if r, ok := regexpCache.m[re]; ok {
regexpCache.RUnlock()
return NewFieldMatcher(field, &regexpMatcher{r: r})
}
regexpCache.RUnlock()
regexpCache.Lock()
r, err := regexp.Compile(re)
if err == nil {
regexpCache.m[re] = r
}
regexpCache.Unlock()
return NewFieldMatcher(field, &regexpMatcher{r: r, err: err})
}
var regexpCache = struct {
sync.RWMutex
m map[string]*regexp.Regexp
}{m: make(map[string]*regexp.Regexp)}
type regexpMatcher struct {
r *regexp.Regexp
err error
}
func (r *regexpMatcher) MatchField(v interface{}) (bool, error) {
if r.err != nil {
return false, r.err
}
switch fieldValue := v.(type) {
case string:
return r.r.MatchString(fieldValue), nil
case []byte:
return r.r.Match(fieldValue), nil
default:
return false, fmt.Errorf("Only string and []byte supported for regexp matcher, got %T", fieldValue)
}
}
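
A sketch of the regexp matcher inside a query, assuming a hypothetical User type and database file; note from the code above that an invalid pattern is only reported when the matcher actually runs.

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/v3"
	"github.com/asdine/storm/v3/q"
)

type User struct {
	ID    int `storm:"id,increment"`
	Email string
}

func main() {
	db, err := storm.Open("users.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Save(&User{Email: "ada@example.org"}); err != nil {
		log.Fatal(err)
	}

	var matches []User
	// Matches string or []byte fields against the compiled (and cached) pattern.
	if err := db.Select(q.Re("Email", `@example\.org$`)).Find(&matches); err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(matches)) // 1
}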

247
vendor/github.com/asdine/storm/v3/q/tree.go generated vendored Normal file

@@ -0,0 +1,247 @@
// Package q contains a list of Matchers used to compare struct fields with values
package q
import (
"go/token"
"reflect"
)
// A Matcher is used to test against a record to see if it matches.
type Matcher interface {
// Match is used to test the criteria against a structure.
Match(interface{}) (bool, error)
}
// A ValueMatcher is used to test against a reflect.Value.
type ValueMatcher interface {
// MatchValue tests if the given reflect.Value matches.
// It is useful when the reflect.Value of an object already exists.
MatchValue(*reflect.Value) (bool, error)
}
type cmp struct {
value interface{}
token token.Token
}
func (c *cmp) MatchField(v interface{}) (bool, error) {
return compare(v, c.value, c.token), nil
}
type trueMatcher struct{}
func (*trueMatcher) Match(i interface{}) (bool, error) {
return true, nil
}
func (*trueMatcher) MatchValue(v *reflect.Value) (bool, error) {
return true, nil
}
type or struct {
children []Matcher
}
func (c *or) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return c.MatchValue(&v)
}
func (c *or) MatchValue(v *reflect.Value) (bool, error) {
for _, matcher := range c.children {
if vm, ok := matcher.(ValueMatcher); ok {
ok, err := vm.MatchValue(v)
if err != nil {
return false, err
}
if ok {
return true, nil
}
continue
}
ok, err := matcher.Match(v.Interface())
if err != nil {
return false, err
}
if ok {
return true, nil
}
}
return false, nil
}
type and struct {
children []Matcher
}
func (c *and) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return c.MatchValue(&v)
}
func (c *and) MatchValue(v *reflect.Value) (bool, error) {
for _, matcher := range c.children {
if vm, ok := matcher.(ValueMatcher); ok {
ok, err := vm.MatchValue(v)
if err != nil {
return false, err
}
if !ok {
return false, nil
}
continue
}
ok, err := matcher.Match(v.Interface())
if err != nil {
return false, err
}
if !ok {
return false, nil
}
}
return true, nil
}
type strictEq struct {
field string
value interface{}
}
func (s *strictEq) MatchField(v interface{}) (bool, error) {
return reflect.DeepEqual(v, s.value), nil
}
type in struct {
list interface{}
}
func (i *in) MatchField(v interface{}) (bool, error) {
ref := reflect.ValueOf(i.list)
if ref.Kind() != reflect.Slice {
return false, nil
}
c := cmp{
token: token.EQL,
}
for i := 0; i < ref.Len(); i++ {
c.value = ref.Index(i).Interface()
ok, err := c.MatchField(v)
if err != nil {
return false, err
}
if ok {
return true, nil
}
}
return false, nil
}
type not struct {
children []Matcher
}
func (n *not) Match(i interface{}) (bool, error) {
v := reflect.Indirect(reflect.ValueOf(i))
return n.MatchValue(&v)
}
func (n *not) MatchValue(v *reflect.Value) (bool, error) {
var err error
for _, matcher := range n.children {
vm, ok := matcher.(ValueMatcher)
if ok {
ok, err = vm.MatchValue(v)
} else {
ok, err = matcher.Match(v.Interface())
}
if err != nil {
return false, err
}
if ok {
return false, nil
}
}
return true, nil
}
// Eq matcher, checks if the given field is equal to the given value
func Eq(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.EQL})
}
// EqF matcher, checks if the given field is equal to the given field
func EqF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.EQL)
}
// StrictEq matcher, checks if the given field is deeply equal to the given value
func StrictEq(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &strictEq{value: v})
}
// Gt matcher, checks if the given field is greater than the given value
func Gt(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.GTR})
}
// GtF matcher, checks if the given field is greater than the given field
func GtF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.GTR)
}
// Gte matcher, checks if the given field is greater than or equal to the given value
func Gte(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.GEQ})
}
// GteF matcher, checks if the given field is greater than or equal to the given field
func GteF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.GEQ)
}
// Lt matcher, checks if the given field is lesser than the given value
func Lt(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.LSS})
}
// LtF matcher, checks if the given field is lesser than the given field
func LtF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.LSS)
}
// Lte matcher, checks if the given field is lesser than or equal to the given value
func Lte(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &cmp{value: v, token: token.LEQ})
}
// LteF matcher, checks if the given field is lesser than or equal to the given field
func LteF(field1, field2 string) Matcher {
return NewField2FieldMatcher(field1, field2, token.LEQ)
}
// In matcher, checks if the given field matches one of the values in the given slice.
// v must be a slice.
func In(field string, v interface{}) Matcher {
return NewFieldMatcher(field, &in{list: v})
}
// True matcher, always returns true
func True() Matcher { return &trueMatcher{} }
// Or matcher, checks if at least one of the given matchers matches the record
func Or(matchers ...Matcher) Matcher { return &or{children: matchers} }
// And matcher, checks if all of the given matchers matches the record
func And(matchers ...Matcher) Matcher { return &and{children: matchers} }
// Not matcher, checks if all of the given matchers return false
func Not(matchers ...Matcher) Matcher { return &not{children: matchers} }
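
A sketch composing the matchers above; the Ticket struct and field names are illustrative. And, Or and Not walk the tree once per record, so no indexes are involved.

package main

import (
	"fmt"

	"github.com/asdine/storm/v3/q"
)

type Ticket struct {
	ID       int `storm:"id"`
	Priority int
	Status   string
	Owner    string
}

func main() {
	// (Priority >= 3) AND (Status in {"open","blocked"}) AND NOT (Owner == "")
	matcher := q.And(
		q.Gte("Priority", 3),
		q.In("Status", []string{"open", "blocked"}),
		q.Not(q.Eq("Owner", "")),
	)

	ok, err := matcher.Match(&Ticket{ID: 1, Priority: 4, Status: "open", Owner: "sam"})
	fmt.Println(ok, err) // true <nil>
}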

219
vendor/github.com/asdine/storm/v3/query.go generated vendored Normal file

@@ -0,0 +1,219 @@
package storm
import (
"github.com/asdine/storm/v3/internal"
"github.com/asdine/storm/v3/q"
bolt "go.etcd.io/bbolt"
)
// Select a list of records that match a list of matchers. Doesn't use indexes.
func (n *node) Select(matchers ...q.Matcher) Query {
tree := q.And(matchers...)
return newQuery(n, tree)
}
// Query is the low-level query engine used by Storm. It runs searches over an entire bucket.
type Query interface {
// Skip matching records by the given number
Skip(int) Query
// Limit the results by the given number
Limit(int) Query
// Order by the given fields, in descending precedence, left-to-right.
OrderBy(...string) Query
// Reverse the order of the results
Reverse() Query
// Bucket specifies the bucket name
Bucket(string) Query
// Find a list of matching records
Find(interface{}) error
// First gets the first matching record
First(interface{}) error
// Delete all matching records
Delete(interface{}) error
// Count all the matching records
Count(interface{}) (int, error)
// Returns all the records without decoding them
Raw() ([][]byte, error)
// Execute the given function for each raw element
RawEach(func([]byte, []byte) error) error
// Execute the given function for each element
Each(interface{}, func(interface{}) error) error
}
func newQuery(n *node, tree q.Matcher) *query {
return &query{
skip: 0,
limit: -1,
node: n,
tree: tree,
}
}
type query struct {
limit int
skip int
reverse bool
tree q.Matcher
node *node
bucket string
orderBy []string
}
func (q *query) Skip(nb int) Query {
q.skip = nb
return q
}
func (q *query) Limit(nb int) Query {
q.limit = nb
return q
}
func (q *query) OrderBy(field ...string) Query {
q.orderBy = field
return q
}
func (q *query) Reverse() Query {
q.reverse = true
return q
}
func (q *query) Bucket(bucketName string) Query {
q.bucket = bucketName
return q
}
func (q *query) Find(to interface{}) error {
sink, err := newListSink(q.node, to)
if err != nil {
return err
}
return q.runQuery(sink)
}
func (q *query) First(to interface{}) error {
sink, err := newFirstSink(q.node, to)
if err != nil {
return err
}
q.limit = 1
return q.runQuery(sink)
}
func (q *query) Delete(kind interface{}) error {
sink, err := newDeleteSink(q.node, kind)
if err != nil {
return err
}
return q.runQuery(sink)
}
func (q *query) Count(kind interface{}) (int, error) {
sink, err := newCountSink(q.node, kind)
if err != nil {
return 0, err
}
err = q.runQuery(sink)
if err != nil {
return 0, err
}
return sink.counter, nil
}
func (q *query) Raw() ([][]byte, error) {
sink := newRawSink()
err := q.runQuery(sink)
if err != nil {
return nil, err
}
return sink.results, nil
}
func (q *query) RawEach(fn func([]byte, []byte) error) error {
sink := newRawSink()
sink.execFn = fn
return q.runQuery(sink)
}
func (q *query) Each(kind interface{}, fn func(interface{}) error) error {
sink, err := newEachSink(kind)
if err != nil {
return err
}
sink.execFn = fn
return q.runQuery(sink)
}
func (q *query) runQuery(sink sink) error {
if q.node.tx != nil {
return q.query(q.node.tx, sink)
}
if sink.readOnly() {
return q.node.s.Bolt.View(func(tx *bolt.Tx) error {
return q.query(tx, sink)
})
}
return q.node.s.Bolt.Update(func(tx *bolt.Tx) error {
return q.query(tx, sink)
})
}
func (q *query) query(tx *bolt.Tx, sink sink) error {
bucketName := q.bucket
if bucketName == "" {
bucketName = sink.bucketName()
}
bucket := q.node.GetBucket(tx, bucketName)
if q.limit == 0 {
return sink.flush()
}
sorter := newSorter(q.node, sink)
sorter.orderBy = q.orderBy
sorter.reverse = q.reverse
sorter.skip = q.skip
sorter.limit = q.limit
if bucket != nil {
c := internal.Cursor{C: bucket.Cursor(), Reverse: q.reverse}
for k, v := c.First(); k != nil; k, v = c.Next() {
if v == nil {
continue
}
stop, err := sorter.filter(q.tree, bucket, k, v)
if err != nil {
return err
}
if stop {
break
}
}
}
return sorter.flush()
}
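
A sketch of a full query chain against the engine above, assuming a hypothetical Article type saved beforehand; OrderBy sorting happens in memory inside the sorter, after the bucket scan.

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/v3"
	"github.com/asdine/storm/v3/q"
)

type Article struct {
	ID    int `storm:"id,increment"`
	Views int
	Title string
}

func main() {
	db, err := storm.Open("articles.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for i, t := range []string{"a", "b", "c"} {
		if err := db.Save(&Article{Views: i * 10, Title: t}); err != nil {
			log.Fatal(err)
		}
	}

	var popular []Article
	err = db.Select(q.Gte("Views", 10)).
		OrderBy("Views").
		Reverse().
		Limit(2).
		Find(&popular)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(popular) // highest Views first

	n, err := db.Select(q.True()).Count(&Article{})
	fmt.Println(n, err)
}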

105
vendor/github.com/asdine/storm/v3/scan.go generated vendored Normal file

@@ -0,0 +1,105 @@
package storm
import (
"bytes"
bolt "go.etcd.io/bbolt"
)
// A BucketScanner scans a Node for a list of buckets
type BucketScanner interface {
// PrefixScan scans the root buckets for keys matching the given prefix.
PrefixScan(prefix string) []Node
// RangeScan scans the buckets in this node over a range, such as a sortable time range.
RangeScan(min, max string) []Node
}
// PrefixScan scans the buckets in this node for keys matching the given prefix.
func (n *node) PrefixScan(prefix string) []Node {
if n.tx != nil {
return n.prefixScan(n.tx, prefix)
}
var nodes []Node
n.readTx(func(tx *bolt.Tx) error {
nodes = n.prefixScan(tx, prefix)
return nil
})
return nodes
}
func (n *node) prefixScan(tx *bolt.Tx, prefix string) []Node {
var (
prefixBytes = []byte(prefix)
nodes []Node
c = n.cursor(tx)
)
if c == nil {
return nil
}
for k, v := c.Seek(prefixBytes); k != nil && bytes.HasPrefix(k, prefixBytes); k, v = c.Next() {
if v != nil {
continue
}
nodes = append(nodes, n.From(string(k)))
}
return nodes
}
// RangeScan scans the buckets in this node over a range such as a sortable time range.
func (n *node) RangeScan(min, max string) []Node {
if n.tx != nil {
return n.rangeScan(n.tx, min, max)
}
var nodes []Node
n.readTx(func(tx *bolt.Tx) error {
nodes = n.rangeScan(tx, min, max)
return nil
})
return nodes
}
func (n *node) rangeScan(tx *bolt.Tx, min, max string) []Node {
var (
minBytes = []byte(min)
maxBytes = []byte(max)
nodes []Node
c = n.cursor(tx)
	)
	if c == nil {
		return nil
	}
	for k, v := c.Seek(minBytes); k != nil && bytes.Compare(k, maxBytes) <= 0; k, v = c.Next() {
if v != nil {
continue
}
nodes = append(nodes, n.From(string(k)))
}
return nodes
}
func (n *node) cursor(tx *bolt.Tx) *bolt.Cursor {
var c *bolt.Cursor
if len(n.rootBucket) > 0 {
b := n.GetBucket(tx)
if b == nil {
return nil
}
c = b.Cursor()
} else {
c = tx.Cursor()
}
return c
}
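
A sketch of the bucket scanners, assuming date-named child buckets created through From; PrefixScan and RangeScan return child Nodes, not records, so the result is iterated with further queries.

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/v3"
)

func main() {
	db, err := storm.Open("logs.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create a few sortable, date-named buckets by writing into them.
	for _, day := range []string{"2022-08-13", "2022-08-14", "2022-08-15"} {
		if err := db.From(day).Set("meta", "created", true); err != nil {
			log.Fatal(err)
		}
	}

	fmt.Println(len(db.PrefixScan("2022-08")))                  // 3
	fmt.Println(len(db.RangeScan("2022-08-14", "2022-08-15"))) // 2
}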

620
vendor/github.com/asdine/storm/v3/sink.go generated vendored Normal file

@@ -0,0 +1,620 @@
package storm
import (
"reflect"
"sort"
"time"
"github.com/asdine/storm/v3/index"
"github.com/asdine/storm/v3/q"
bolt "go.etcd.io/bbolt"
)
type item struct {
value *reflect.Value
bucket *bolt.Bucket
k []byte
v []byte
}
func newSorter(n Node, snk sink) *sorter {
return &sorter{
node: n,
sink: snk,
skip: 0,
limit: -1,
list: make([]*item, 0),
err: make(chan error),
done: make(chan struct{}),
}
}
type sorter struct {
node Node
sink sink
list []*item
skip int
limit int
orderBy []string
reverse bool
err chan error
done chan struct{}
}
func (s *sorter) filter(tree q.Matcher, bucket *bolt.Bucket, k, v []byte) (bool, error) {
itm := &item{
bucket: bucket,
k: k,
v: v,
}
rsink, ok := s.sink.(reflectSink)
if !ok {
return s.add(itm)
}
newElem := rsink.elem()
if err := s.node.Codec().Unmarshal(v, newElem.Interface()); err != nil {
return false, err
}
itm.value = &newElem
if tree != nil {
ok, err := tree.Match(newElem.Interface())
if err != nil {
return false, err
}
if !ok {
return false, nil
}
}
if len(s.orderBy) == 0 {
return s.add(itm)
}
if _, ok := s.sink.(sliceSink); ok {
// add directly to sink, we'll apply skip/limits after sorting
return false, s.sink.add(itm)
}
s.list = append(s.list, itm)
return false, nil
}
func (s *sorter) add(itm *item) (stop bool, err error) {
if s.limit == 0 {
return true, nil
}
if s.skip > 0 {
s.skip--
return false, nil
}
if s.limit > 0 {
s.limit--
}
err = s.sink.add(itm)
return s.limit == 0, err
}
func (s *sorter) compareValue(left reflect.Value, right reflect.Value) int {
if !left.IsValid() || !right.IsValid() {
if left.IsValid() {
return 1
}
return -1
}
switch left.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
l, r := left.Int(), right.Int()
if l < r {
return -1
}
if l > r {
return 1
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
l, r := left.Uint(), right.Uint()
if l < r {
return -1
}
if l > r {
return 1
}
case reflect.Float32, reflect.Float64:
l, r := left.Float(), right.Float()
if l < r {
return -1
}
if l > r {
return 1
}
case reflect.String:
l, r := left.String(), right.String()
if l < r {
return -1
}
if l > r {
return 1
}
case reflect.Struct:
if lt, lok := left.Interface().(time.Time); lok {
if rt, rok := right.Interface().(time.Time); rok {
if lok && rok {
if lt.Before(rt) {
return -1
} else {
return 1
}
}
}
}
default:
rawLeft, err := toBytes(left.Interface(), s.node.Codec())
if err != nil {
return -1
}
rawRight, err := toBytes(right.Interface(), s.node.Codec())
if err != nil {
return 1
}
l, r := string(rawLeft), string(rawRight)
if l < r {
return -1
}
if l > r {
return 1
}
}
return 0
}
func (s *sorter) less(leftElem reflect.Value, rightElem reflect.Value) bool {
for _, orderBy := range s.orderBy {
leftField := reflect.Indirect(leftElem).FieldByName(orderBy)
if !leftField.IsValid() {
s.err <- ErrNotFound
return false
}
rightField := reflect.Indirect(rightElem).FieldByName(orderBy)
if !rightField.IsValid() {
s.err <- ErrNotFound
return false
}
direction := 1
if s.reverse {
direction = -1
}
switch s.compareValue(leftField, rightField) * direction {
case -1:
return true
case 1:
return false
default:
continue
}
}
return false
}
func (s *sorter) flush() error {
if len(s.orderBy) == 0 {
return s.sink.flush()
}
go func() {
sort.Sort(s)
close(s.err)
}()
err := <-s.err
close(s.done)
if err != nil {
return err
}
if ssink, ok := s.sink.(sliceSink); ok {
if !ssink.slice().IsValid() {
return s.sink.flush()
}
if s.skip >= ssink.slice().Len() {
ssink.reset()
return s.sink.flush()
}
leftBound := s.skip
if leftBound < 0 {
leftBound = 0
}
limit := s.limit
if s.limit < 0 {
limit = 0
}
rightBound := leftBound + limit
if rightBound > ssink.slice().Len() || rightBound == leftBound {
rightBound = ssink.slice().Len()
}
ssink.setSlice(ssink.slice().Slice(leftBound, rightBound))
return s.sink.flush()
}
for _, itm := range s.list {
if itm == nil {
break
}
stop, err := s.add(itm)
if err != nil {
return err
}
if stop {
break
}
}
return s.sink.flush()
}
func (s *sorter) Len() int {
// skip if we encountered an earlier error
select {
case <-s.done:
return 0
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
return ssink.slice().Len()
}
return len(s.list)
}
func (s *sorter) Less(i, j int) bool {
// skip if we encountered an earlier error
select {
case <-s.done:
return false
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
return s.less(ssink.slice().Index(i), ssink.slice().Index(j))
}
return s.less(*s.list[i].value, *s.list[j].value)
}
type sink interface {
bucketName() string
flush() error
add(*item) error
readOnly() bool
}
type reflectSink interface {
elem() reflect.Value
}
type sliceSink interface {
slice() reflect.Value
setSlice(reflect.Value)
reset()
}
func newListSink(node Node, to interface{}) (*listSink, error) {
ref := reflect.ValueOf(to)
if ref.Kind() != reflect.Ptr || reflect.Indirect(ref).Kind() != reflect.Slice {
return nil, ErrSlicePtrNeeded
}
sliceType := reflect.Indirect(ref).Type()
elemType := sliceType.Elem()
if elemType.Kind() == reflect.Ptr {
elemType = elemType.Elem()
}
if elemType.Name() == "" {
return nil, ErrNoName
}
return &listSink{
node: node,
ref: ref,
isPtr: sliceType.Elem().Kind() == reflect.Ptr,
elemType: elemType,
name: elemType.Name(),
results: reflect.MakeSlice(reflect.Indirect(ref).Type(), 0, 0),
}, nil
}
type listSink struct {
node Node
ref reflect.Value
results reflect.Value
elemType reflect.Type
name string
isPtr bool
idx int
}
func (l *listSink) slice() reflect.Value {
return l.results
}
func (l *listSink) setSlice(s reflect.Value) {
l.results = s
}
func (l *listSink) reset() {
l.results = reflect.MakeSlice(reflect.Indirect(l.ref).Type(), 0, 0)
}
func (l *listSink) elem() reflect.Value {
if l.results.IsValid() && l.idx < l.results.Len() {
return l.results.Index(l.idx).Addr()
}
return reflect.New(l.elemType)
}
func (l *listSink) bucketName() string {
return l.name
}
func (l *listSink) add(i *item) error {
if l.idx == l.results.Len() {
if l.isPtr {
l.results = reflect.Append(l.results, *i.value)
} else {
l.results = reflect.Append(l.results, reflect.Indirect(*i.value))
}
}
l.idx++
return nil
}
func (l *listSink) flush() error {
if l.results.IsValid() && l.results.Len() > 0 {
reflect.Indirect(l.ref).Set(l.results)
return nil
}
return ErrNotFound
}
func (l *listSink) readOnly() bool {
return true
}
func newFirstSink(node Node, to interface{}) (*firstSink, error) {
ref := reflect.ValueOf(to)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &firstSink{
node: node,
ref: ref,
}, nil
}
type firstSink struct {
node Node
ref reflect.Value
found bool
}
func (f *firstSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(f.ref).Type())
}
func (f *firstSink) bucketName() string {
return reflect.Indirect(f.ref).Type().Name()
}
func (f *firstSink) add(i *item) error {
reflect.Indirect(f.ref).Set(i.value.Elem())
f.found = true
return nil
}
func (f *firstSink) flush() error {
if !f.found {
return ErrNotFound
}
return nil
}
func (f *firstSink) readOnly() bool {
return true
}
func newDeleteSink(node Node, kind interface{}) (*deleteSink, error) {
ref := reflect.ValueOf(kind)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &deleteSink{
node: node,
ref: ref,
}, nil
}
type deleteSink struct {
node Node
ref reflect.Value
removed int
}
func (d *deleteSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(d.ref).Type())
}
func (d *deleteSink) bucketName() string {
return reflect.Indirect(d.ref).Type().Name()
}
func (d *deleteSink) add(i *item) error {
info, err := extract(&d.ref)
if err != nil {
return err
}
for fieldName, fieldCfg := range info.Fields {
if fieldCfg.Index == "" {
continue
}
idx, err := getIndex(i.bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
err = idx.RemoveID(i.k)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
}
d.removed++
return i.bucket.Delete(i.k)
}
func (d *deleteSink) flush() error {
if d.removed == 0 {
return ErrNotFound
}
return nil
}
func (d *deleteSink) readOnly() bool {
return false
}
func newCountSink(node Node, kind interface{}) (*countSink, error) {
ref := reflect.ValueOf(kind)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &countSink{
node: node,
ref: ref,
}, nil
}
type countSink struct {
node Node
ref reflect.Value
counter int
}
func (c *countSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(c.ref).Type())
}
func (c *countSink) bucketName() string {
return reflect.Indirect(c.ref).Type().Name()
}
func (c *countSink) add(i *item) error {
c.counter++
return nil
}
func (c *countSink) flush() error {
return nil
}
func (c *countSink) readOnly() bool {
return true
}
func newRawSink() *rawSink {
return &rawSink{}
}
type rawSink struct {
results [][]byte
execFn func([]byte, []byte) error
}
func (r *rawSink) add(i *item) error {
if r.execFn != nil {
err := r.execFn(i.k, i.v)
if err != nil {
return err
}
} else {
r.results = append(r.results, i.v)
}
return nil
}
func (r *rawSink) bucketName() string {
return ""
}
func (r *rawSink) flush() error {
return nil
}
func (r *rawSink) readOnly() bool {
return true
}
func newEachSink(to interface{}) (*eachSink, error) {
ref := reflect.ValueOf(to)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return nil, ErrStructPtrNeeded
}
return &eachSink{
ref: ref,
}, nil
}
type eachSink struct {
ref reflect.Value
execFn func(interface{}) error
}
func (e *eachSink) elem() reflect.Value {
return reflect.New(reflect.Indirect(e.ref).Type())
}
func (e *eachSink) bucketName() string {
return reflect.Indirect(e.ref).Type().Name()
}
func (e *eachSink) add(i *item) error {
return e.execFn(i.value.Interface())
}
func (e *eachSink) flush() error {
return nil
}
func (e *eachSink) readOnly() bool {
return true
}
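
The sinks above are never used directly; they sit behind the Query verbs. A sketch of the verbs that exercise eachSink, countSink and rawSink, assuming a hypothetical Event type and database file:

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/v3"
	"github.com/asdine/storm/v3/q"
)

type Event struct {
	ID   int `storm:"id,increment"`
	Kind string
}

func main() {
	db, err := storm.Open("events.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for _, k := range []string{"login", "logout", "login"} {
		if err := db.Save(&Event{Kind: k}); err != nil {
			log.Fatal(err)
		}
	}

	// eachSink: decode one record at a time instead of materialising a slice.
	err = db.Select(q.Eq("Kind", "login")).Each(new(Event), func(v interface{}) error {
		fmt.Println("login event:", v.(*Event).ID)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	// rawSink: the stored bytes, without decoding.
	raws, err := db.Select(q.True()).Raw()
	fmt.Println(len(raws), err)
}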

22
vendor/github.com/asdine/storm/v3/sink_sorter_swap.go generated vendored Normal file

@@ -0,0 +1,22 @@
// +build !go1.8
package storm
import "reflect"
func (s *sorter) Swap(i, j int) {
// skip if we encountered an earlier error
select {
case <-s.done:
return
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
x, y := ssink.slice().Index(i).Interface(), ssink.slice().Index(j).Interface()
ssink.slice().Index(i).Set(reflect.ValueOf(y))
ssink.slice().Index(j).Set(reflect.ValueOf(x))
} else {
s.list[i], s.list[j] = s.list[j], s.list[i]
}
}


@@ -0,0 +1,20 @@
// +build go1.8
package storm
import "reflect"
func (s *sorter) Swap(i, j int) {
// skip if we encountered an earlier error
select {
case <-s.done:
return
default:
}
if ssink, ok := s.sink.(sliceSink); ok {
reflect.Swapper(ssink.slice().Interface())(i, j)
} else {
s.list[i], s.list[j] = s.list[j], s.list[i]
}
}
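
The go1.8 variant above relies on reflect.Swapper; a tiny standalone illustration of that stdlib helper:

package main

import (
	"fmt"
	"reflect"
)

func main() {
	s := []string{"a", "b", "c"}
	swap := reflect.Swapper(s) // one closure per slice, reused for every Swap call
	swap(0, 2)
	fmt.Println(s) // [c b a]
}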

425
vendor/github.com/asdine/storm/v3/store.go generated vendored Normal file

@@ -0,0 +1,425 @@
package storm
import (
"bytes"
"reflect"
"github.com/asdine/storm/v3/index"
"github.com/asdine/storm/v3/q"
bolt "go.etcd.io/bbolt"
)
// TypeStore stores user defined types in BoltDB.
type TypeStore interface {
Finder
// Init creates the indexes and buckets for a given structure
Init(data interface{}) error
// ReIndex rebuilds all the indexes of a bucket
ReIndex(data interface{}) error
// Save a structure
Save(data interface{}) error
// Update a structure
Update(data interface{}) error
// UpdateField updates a single field
UpdateField(data interface{}, fieldName string, value interface{}) error
// Drop a bucket
Drop(data interface{}) error
// DeleteStruct deletes a structure from the associated bucket
DeleteStruct(data interface{}) error
}
// Init creates the indexes and buckets for a given structure
func (n *node) Init(data interface{}) error {
v := reflect.ValueOf(data)
cfg, err := extract(&v)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.init(tx, cfg)
})
}
func (n *node) init(tx *bolt.Tx, cfg *structConfig) error {
bucket, err := n.CreateBucketIfNotExists(tx, cfg.Name)
if err != nil {
return err
}
// save node configuration in the bucket
_, err = newMeta(bucket, n)
if err != nil {
return err
}
for fieldName, fieldCfg := range cfg.Fields {
if fieldCfg.Index == "" {
continue
}
switch fieldCfg.Index {
case tagUniqueIdx:
_, err = index.NewUniqueIndex(bucket, []byte(indexPrefix+fieldName))
case tagIdx:
_, err = index.NewListIndex(bucket, []byte(indexPrefix+fieldName))
default:
err = ErrIdxNotFound
}
if err != nil {
return err
}
}
return nil
}
func (n *node) ReIndex(data interface{}) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.reIndex(tx, data, cfg)
})
}
func (n *node) reIndex(tx *bolt.Tx, data interface{}, cfg *structConfig) error {
root := n.WithTransaction(tx)
nodes := root.From(cfg.Name).PrefixScan(indexPrefix)
bucket := root.GetBucket(tx, cfg.Name)
if bucket == nil {
return ErrNotFound
}
for _, node := range nodes {
buckets := node.Bucket()
name := buckets[len(buckets)-1]
err := bucket.DeleteBucket([]byte(name))
if err != nil {
return err
}
}
total, err := root.Count(data)
if err != nil {
return err
}
for i := 0; i < total; i++ {
err = root.Select(q.True()).Skip(i).First(data)
if err != nil {
return err
}
err = root.Update(data)
if err != nil {
return err
}
}
return nil
}
// Save a structure
func (n *node) Save(data interface{}) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
if cfg.ID.IsZero {
if !cfg.ID.IsInteger || !cfg.ID.Increment {
return ErrZeroID
}
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.save(tx, cfg, data, false)
})
}
func (n *node) save(tx *bolt.Tx, cfg *structConfig, data interface{}, update bool) error {
bucket, err := n.CreateBucketIfNotExists(tx, cfg.Name)
if err != nil {
return err
}
// save node configuration in the bucket
meta, err := newMeta(bucket, n)
if err != nil {
return err
}
if cfg.ID.IsZero {
err = meta.increment(cfg.ID)
if err != nil {
return err
}
}
id, err := toBytes(cfg.ID.Value.Interface(), n.codec)
if err != nil {
return err
}
for fieldName, fieldCfg := range cfg.Fields {
if !update && !fieldCfg.IsID && fieldCfg.Increment && fieldCfg.IsInteger && fieldCfg.IsZero {
err = meta.increment(fieldCfg)
if err != nil {
return err
}
}
if fieldCfg.Index == "" {
continue
}
idx, err := getIndex(bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
if update && fieldCfg.IsZero && !fieldCfg.ForceUpdate {
continue
}
if fieldCfg.IsZero {
err = idx.RemoveID(id)
if err != nil {
return err
}
continue
}
value, err := toBytes(fieldCfg.Value.Interface(), n.codec)
if err != nil {
return err
}
var found bool
idsSaved, err := idx.All(value, nil)
if err != nil {
return err
}
for _, idSaved := range idsSaved {
if bytes.Compare(idSaved, id) == 0 {
found = true
break
}
}
if found {
continue
}
err = idx.RemoveID(id)
if err != nil {
return err
}
err = idx.Add(value, id)
if err != nil {
if err == index.ErrAlreadyExists {
return ErrAlreadyExists
}
return err
}
}
raw, err := n.codec.Marshal(data)
if err != nil {
return err
}
return bucket.Put(id, raw)
}
// Update a structure
func (n *node) Update(data interface{}) error {
return n.update(data, func(ref *reflect.Value, current *reflect.Value, cfg *structConfig) error {
numfield := ref.NumField()
for i := 0; i < numfield; i++ {
f := ref.Field(i)
if ref.Type().Field(i).PkgPath != "" {
continue
}
zero := reflect.Zero(f.Type()).Interface()
actual := f.Interface()
if !reflect.DeepEqual(actual, zero) {
cf := current.Field(i)
cf.Set(f)
idxInfo, ok := cfg.Fields[ref.Type().Field(i).Name]
if ok {
idxInfo.Value = &cf
}
}
}
return nil
})
}
// UpdateField updates a single field
func (n *node) UpdateField(data interface{}, fieldName string, value interface{}) error {
return n.update(data, func(ref *reflect.Value, current *reflect.Value, cfg *structConfig) error {
f := current.FieldByName(fieldName)
if !f.IsValid() {
return ErrNotFound
}
tf, _ := current.Type().FieldByName(fieldName)
if tf.PkgPath != "" {
return ErrNotFound
}
v := reflect.ValueOf(value)
if v.Kind() != f.Kind() {
return ErrIncompatibleValue
}
f.Set(v)
idxInfo, ok := cfg.Fields[fieldName]
if ok {
idxInfo.Value = &f
idxInfo.IsZero = isZero(idxInfo.Value)
idxInfo.ForceUpdate = true
}
return nil
})
}
func (n *node) update(data interface{}, fn func(*reflect.Value, *reflect.Value, *structConfig) error) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
if cfg.ID.IsZero {
return ErrNoID
}
current := reflect.New(reflect.Indirect(ref).Type())
return n.readWriteTx(func(tx *bolt.Tx) error {
err = n.WithTransaction(tx).One(cfg.ID.Name, cfg.ID.Value.Interface(), current.Interface())
if err != nil {
return err
}
ref := reflect.ValueOf(data).Elem()
cref := current.Elem()
err = fn(&ref, &cref, cfg)
if err != nil {
return err
}
return n.save(tx, cfg, current.Interface(), true)
})
}
// Drop a bucket
func (n *node) Drop(data interface{}) error {
var bucketName string
v := reflect.ValueOf(data)
if v.Kind() != reflect.String {
info, err := extract(&v)
if err != nil {
return err
}
bucketName = info.Name
} else {
bucketName = v.Interface().(string)
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.drop(tx, bucketName)
})
}
func (n *node) drop(tx *bolt.Tx, bucketName string) error {
bucket := n.GetBucket(tx)
if bucket == nil {
return tx.DeleteBucket([]byte(bucketName))
}
return bucket.DeleteBucket([]byte(bucketName))
}
// DeleteStruct deletes a structure from the associated bucket
func (n *node) DeleteStruct(data interface{}) error {
ref := reflect.ValueOf(data)
if !ref.IsValid() || ref.Kind() != reflect.Ptr || ref.Elem().Kind() != reflect.Struct {
return ErrStructPtrNeeded
}
cfg, err := extract(&ref)
if err != nil {
return err
}
id, err := toBytes(cfg.ID.Value.Interface(), n.codec)
if err != nil {
return err
}
return n.readWriteTx(func(tx *bolt.Tx) error {
return n.deleteStruct(tx, cfg, id)
})
}
func (n *node) deleteStruct(tx *bolt.Tx, cfg *structConfig, id []byte) error {
bucket := n.GetBucket(tx, cfg.Name)
if bucket == nil {
return ErrNotFound
}
for fieldName, fieldCfg := range cfg.Fields {
if fieldCfg.Index == "" {
continue
}
idx, err := getIndex(bucket, fieldCfg.Index, fieldName)
if err != nil {
return err
}
err = idx.RemoveID(id)
if err != nil {
if err == index.ErrNotFound {
return ErrNotFound
}
return err
}
}
raw := bucket.Get(id)
if raw == nil {
return ErrNotFound
}
return bucket.Delete(id)
}
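
A sketch of the TypeStore methods above end to end, assuming a hypothetical User type and database file; the struct tags drive the unique and list index handling seen in save and deleteStruct.

package main

import (
	"fmt"
	"log"

	"github.com/asdine/storm/v3"
)

type User struct {
	ID    int    `storm:"id,increment"`
	Email string `storm:"unique"`
	Group string `storm:"index"`
	Name  string
}

func main() {
	db, err := storm.Open("store.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Init is optional; it pre-creates the bucket and its unique/list indexes.
	if err := db.Init(&User{}); err != nil {
		log.Fatal(err)
	}

	u := User{Email: "ada@example.org", Group: "staff", Name: "Ada"}
	if err := db.Save(&u); err != nil {
		log.Fatal(err) // ErrAlreadyExists on a duplicate unique field
	}

	// Update only writes non-zero fields; UpdateField forces a single field.
	if err := db.UpdateField(&User{ID: u.ID}, "Group", "admins"); err != nil {
		log.Fatal(err)
	}

	// One is part of the Finder interface defined elsewhere in this package.
	var found User
	if err := db.One("ID", u.ID, &found); err != nil {
		log.Fatal(err)
	}
	fmt.Println(found.Group) // admins

	if err := db.DeleteStruct(&found); err != nil {
		log.Fatal(err)
	}
}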

142
vendor/github.com/asdine/storm/v3/storm.go generated vendored Normal file

@@ -0,0 +1,142 @@
package storm
import (
"bytes"
"encoding/binary"
"time"
"github.com/asdine/storm/v3/codec"
"github.com/asdine/storm/v3/codec/json"
bolt "go.etcd.io/bbolt"
)
const (
dbinfo = "__storm_db"
metadataBucket = "__storm_metadata"
)
// Defaults to json
var defaultCodec = json.Codec
// Open opens a database at the given path with optional Storm options.
func Open(path string, stormOptions ...func(*Options) error) (*DB, error) {
var err error
var opts Options
for _, option := range stormOptions {
if err = option(&opts); err != nil {
return nil, err
}
}
s := DB{
Bolt: opts.bolt,
}
n := node{
s: &s,
codec: opts.codec,
batchMode: opts.batchMode,
rootBucket: opts.rootBucket,
}
if n.codec == nil {
n.codec = defaultCodec
}
if opts.boltMode == 0 {
opts.boltMode = 0600
}
if opts.boltOptions == nil {
opts.boltOptions = &bolt.Options{Timeout: 1 * time.Second}
}
s.Node = &n
// skip if UseDB option is used
if s.Bolt == nil {
s.Bolt, err = bolt.Open(path, opts.boltMode, opts.boltOptions)
if err != nil {
return nil, err
}
}
err = s.checkVersion()
if err != nil {
return nil, err
}
return &s, nil
}
// DB is the wrapper around BoltDB. It contains an instance of BoltDB and uses it to perform all the
// needed operations
type DB struct {
// The root node that points to the root bucket.
Node
// Bolt is still easily accessible
Bolt *bolt.DB
}
// Close the database
func (s *DB) Close() error {
return s.Bolt.Close()
}
func (s *DB) checkVersion() error {
var v string
err := s.Get(dbinfo, "version", &v)
if err != nil && err != ErrNotFound {
return err
}
// for now, we only set the current version if it doesn't exist.
// v1 and v2 database files are compatible.
if v == "" {
return s.Set(dbinfo, "version", Version)
}
return nil
}
// toBytes turns an interface into a slice of bytes
func toBytes(key interface{}, codec codec.MarshalUnmarshaler) ([]byte, error) {
if key == nil {
return nil, nil
}
switch t := key.(type) {
case []byte:
return t, nil
case string:
return []byte(t), nil
case int:
return numbertob(int64(t))
case uint:
return numbertob(uint64(t))
case int8, int16, int32, int64, uint8, uint16, uint32, uint64:
return numbertob(t)
default:
return codec.Marshal(key)
}
}
func numbertob(v interface{}) ([]byte, error) {
var buf bytes.Buffer
err := binary.Write(&buf, binary.BigEndian, v)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func numberfromb(raw []byte) (int64, error) {
r := bytes.NewReader(raw)
var to int64
err := binary.Read(r, binary.BigEndian, &to)
if err != nil {
return 0, err
}
return to, nil
}
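
toBytes and numbertob above keep integer keys in fixed-width big-endian form so BoltDB's byte-ordered keys sort numerically (for non-negative values of the same width). A small standalone illustration of that property:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encode mirrors numbertob above: fixed-width big-endian encoding of an integer key.
func encode(v int64) []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.BigEndian, v) // error ignored: writing to a bytes.Buffer cannot fail here
	return buf.Bytes()
}

func main() {
	a, b := encode(2), encode(10)
	// Lexicographic byte order matches numeric order, unlike decimal strings ("10" < "2").
	fmt.Println(bytes.Compare(a, b) < 0) // true
}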

52
vendor/github.com/asdine/storm/v3/transaction.go generated vendored Normal file

@@ -0,0 +1,52 @@
package storm
import bolt "go.etcd.io/bbolt"
// Tx is a transaction.
type Tx interface {
// Commit writes all changes to disk.
Commit() error
// Rollback closes the transaction and ignores all previous updates.
Rollback() error
}
// Begin starts a new transaction.
func (n node) Begin(writable bool) (Node, error) {
var err error
n.tx, err = n.s.Bolt.Begin(writable)
if err != nil {
return nil, err
}
return &n, nil
}
// Rollback closes the transaction and ignores all previous updates.
func (n *node) Rollback() error {
if n.tx == nil {
return ErrNotInTransaction
}
err := n.tx.Rollback()
if err == bolt.ErrTxClosed {
return ErrNotInTransaction
}
return err
}
// Commit writes all changes to disk.
func (n *node) Commit() error {
if n.tx == nil {
return ErrNotInTransaction
}
err := n.tx.Commit()
if err == bolt.ErrTxClosed {
return ErrNotInTransaction
}
return err
}
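
A sketch of the transaction API above, assuming a hypothetical User type and database file; the derived node keeps the bolt.Tx until Commit or Rollback.

package main

import (
	"log"

	"github.com/asdine/storm/v3"
)

type User struct {
	ID   int `storm:"id,increment"`
	Name string
}

func main() {
	db, err := storm.Open("tx.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	tx, err := db.Begin(true) // writable transaction
	if err != nil {
		log.Fatal(err)
	}
	// Safe after a successful Commit: bolt returns ErrTxClosed, which is mapped
	// to ErrNotInTransaction and discarded by the deferred call.
	defer tx.Rollback()

	if err := tx.Save(&User{Name: "Ada"}); err != nil {
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}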

4
vendor/github.com/asdine/storm/v3/version.go generated vendored Normal file

@@ -0,0 +1,4 @@
package storm
// Version of Storm
const Version = "2.0.0"

15
vendor/github.com/davecgh/go-spew/LICENSE generated vendored Normal file

@@ -0,0 +1,15 @@
ISC License
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

145
vendor/github.com/davecgh/go-spew/spew/bypass.go generated vendored Normal file

@@ -0,0 +1,145 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is not running on Google App Engine, compiled by GopherJS, and
// "-tags safe" is not added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// Go versions prior to 1.4 are disabled because they use a different layout
// for interfaces which makes the implementation of unsafeReflectValue more complex.
// +build !js,!appengine,!safe,!disableunsafe,go1.4
package spew
import (
"reflect"
"unsafe"
)
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = false
// ptrSize is the size of a pointer on the current arch.
ptrSize = unsafe.Sizeof((*byte)(nil))
)
type flag uintptr
var (
// flagRO indicates whether the value field of a reflect.Value
// is read-only.
flagRO flag
// flagAddr indicates whether the address of the reflect.Value's
// value may be taken.
flagAddr flag
)
// flagKindMask holds the bits that make up the kind
// part of the flags field. In all the supported versions,
// it is in the lower 5 bits.
const flagKindMask = flag(0x1f)
// Different versions of Go have used different
// bit layouts for the flags type. This table
// records the known combinations.
var okFlags = []struct {
ro, addr flag
}{{
// From Go 1.4 to 1.5
ro: 1 << 5,
addr: 1 << 7,
}, {
// Up to Go tip.
ro: 1<<5 | 1<<6,
addr: 1 << 8,
}}
var flagValOffset = func() uintptr {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
}
return field.Offset
}()
// flagField returns a pointer to the flag field of a reflect.Value.
func flagField(v *reflect.Value) *flag {
return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
}
// unsafeReflectValue converts the passed reflect.Value into a one that bypasses
// the typical safety restrictions preventing access to unaddressable and
// unexported data. It works by digging the raw pointer to the underlying
// value out of the protected value and generating a new unprotected (unsafe)
// reflect.Value to it.
//
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
func unsafeReflectValue(v reflect.Value) reflect.Value {
if !v.IsValid() || (v.CanInterface() && v.CanAddr()) {
return v
}
flagFieldPtr := flagField(&v)
*flagFieldPtr &^= flagRO
*flagFieldPtr |= flagAddr
return v
}
// Sanity checks against future reflect package changes
// to the type or semantics of the Value.flag field.
func init() {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
}
if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {
panic("reflect.Value flag field has changed kind")
}
type t0 int
var t struct {
A t0
// t0 will have flagEmbedRO set.
t0
// a will have flagStickyRO set
a t0
}
vA := reflect.ValueOf(t).FieldByName("A")
va := reflect.ValueOf(t).FieldByName("a")
vt0 := reflect.ValueOf(t).FieldByName("t0")
// Infer flagRO from the difference between the flags
// for the (otherwise identical) fields in t.
flagPublic := *flagField(&vA)
flagWithRO := *flagField(&va) | *flagField(&vt0)
flagRO = flagPublic ^ flagWithRO
// Infer flagAddr from the difference between a value
// taken from a pointer and not.
vPtrA := reflect.ValueOf(&t).Elem().FieldByName("A")
flagNoPtr := *flagField(&vA)
flagPtr := *flagField(&vPtrA)
flagAddr = flagNoPtr ^ flagPtr
// Check that the inferred flags tally with one of the known versions.
for _, f := range okFlags {
if flagRO == f.ro && flagAddr == f.addr {
return
}
}
panic("reflect.Value read-only flag has changed semantics")
}
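
What the flag surgery above buys: ordinarily reflect refuses to hand back unexported or unaddressable values. A tiny standalone illustration of the restriction itself (stdlib only, no unsafe); the payload type is hypothetical.

package main

import (
	"fmt"
	"reflect"
)

type payload struct {
	Public  string
	private string
}

func main() {
	v := reflect.ValueOf(payload{Public: "ok", private: "hidden"})
	fmt.Println(v.Field(0).CanInterface()) // true  - exported field
	fmt.Println(v.Field(1).CanInterface()) // false - unexported; calling Interface() here would panic
	fmt.Println(v.Field(0).CanAddr())      // false - the value was not obtained from a pointer
}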

38
vendor/github.com/davecgh/go-spew/spew/bypasssafe.go generated vendored Normal file

@@ -0,0 +1,38 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is running on Google App Engine, compiled by GopherJS, or
// "-tags safe" is added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build js appengine safe disableunsafe !go1.4
package spew
import "reflect"
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = true
)
// unsafeReflectValue typically converts the passed reflect.Value into a one
// that bypasses the typical safety restrictions preventing access to
// unaddressable and unexported data. However, doing this relies on access to
// the unsafe package. This is a stub version which simply returns the passed
// reflect.Value when the unsafe package is not available.
func unsafeReflectValue(v reflect.Value) reflect.Value {
return v
}

341
vendor/github.com/davecgh/go-spew/spew/common.go generated vendored Normal file

@@ -0,0 +1,341 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"reflect"
"sort"
"strconv"
)
// Some constants in the form of bytes to avoid string overhead. This mirrors
// the technique used in the fmt package.
var (
panicBytes = []byte("(PANIC=")
plusBytes = []byte("+")
iBytes = []byte("i")
trueBytes = []byte("true")
falseBytes = []byte("false")
interfaceBytes = []byte("(interface {})")
commaNewlineBytes = []byte(",\n")
newlineBytes = []byte("\n")
openBraceBytes = []byte("{")
openBraceNewlineBytes = []byte("{\n")
closeBraceBytes = []byte("}")
asteriskBytes = []byte("*")
colonBytes = []byte(":")
colonSpaceBytes = []byte(": ")
openParenBytes = []byte("(")
closeParenBytes = []byte(")")
spaceBytes = []byte(" ")
pointerChainBytes = []byte("->")
nilAngleBytes = []byte("<nil>")
maxNewlineBytes = []byte("<max depth reached>\n")
maxShortBytes = []byte("<max>")
circularBytes = []byte("<already shown>")
circularShortBytes = []byte("<shown>")
invalidAngleBytes = []byte("<invalid>")
openBracketBytes = []byte("[")
closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("map[")
closeMapBytes = []byte("]")
lenEqualsBytes = []byte("len=")
capEqualsBytes = []byte("cap=")
)
// hexDigits is used to map a decimal value to a hex digit.
var hexDigits = "0123456789abcdef"
// catchPanic handles any panics that might occur during the handleMethods
// calls.
func catchPanic(w io.Writer, v reflect.Value) {
if err := recover(); err != nil {
w.Write(panicBytes)
fmt.Fprintf(w, "%v", err)
w.Write(closeParenBytes)
}
}
// handleMethods attempts to call the Error and String methods on the underlying
// type the passed reflect.Value represents and outputs the result to Writer w.
//
// It handles panics in any called methods by catching and displaying the error
// as the formatted value.
func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
// We need an interface to check if the type implements the error or
// Stringer interface. However, the reflect package won't give us an
// interface on certain things like unexported struct fields in order
// to enforce visibility rules. We use unsafe, when it's available,
// to bypass these restrictions since this package does not mutate the
// values.
if !v.CanInterface() {
if UnsafeDisabled {
return false
}
v = unsafeReflectValue(v)
}
// Choose whether or not to do error and Stringer interface lookups against
// the base type or a pointer to the base type depending on settings.
// Technically calling one of these methods with a pointer receiver can
// mutate the value, however, types which choose to satisfy an error or
// Stringer interface with a pointer receiver should not be mutating their
// state inside these interface methods.
if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
v = unsafeReflectValue(v)
}
if v.CanAddr() {
v = v.Addr()
}
// Is it an error or Stringer?
switch iface := v.Interface().(type) {
case error:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.Error()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.Error()))
return true
case fmt.Stringer:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.String()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.String()))
return true
}
return false
}
// printBool outputs a boolean value as true or false to Writer w.
func printBool(w io.Writer, val bool) {
if val {
w.Write(trueBytes)
} else {
w.Write(falseBytes)
}
}
// printInt outputs a signed integer value to Writer w.
func printInt(w io.Writer, val int64, base int) {
w.Write([]byte(strconv.FormatInt(val, base)))
}
// printUint outputs an unsigned integer value to Writer w.
func printUint(w io.Writer, val uint64, base int) {
w.Write([]byte(strconv.FormatUint(val, base)))
}
// printFloat outputs a floating point value using the specified precision,
// which is expected to be 32 or 64 bit, to Writer w.
func printFloat(w io.Writer, val float64, precision int) {
w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
}
// printComplex outputs a complex value using the specified float precision
// for the real and imaginary parts to Writer w.
func printComplex(w io.Writer, c complex128, floatPrecision int) {
r := real(c)
w.Write(openParenBytes)
w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
i := imag(c)
if i >= 0 {
w.Write(plusBytes)
}
w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
w.Write(iBytes)
w.Write(closeParenBytes)
}
// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) {
// Null pointer.
num := uint64(p)
if num == 0 {
w.Write(nilAngleBytes)
return
}
// Max uint64 is 16 hex digits + 2 bytes for the '0x' prefix
buf := make([]byte, 18)
// It's simpler to construct the hex string right to left.
base := uint64(16)
i := len(buf) - 1
for num >= base {
buf[i] = hexDigits[num%base]
num /= base
i--
}
buf[i] = hexDigits[num]
// Add '0x' prefix.
i--
buf[i] = 'x'
i--
buf[i] = '0'
// Strip unused leading bytes.
buf = buf[i:]
w.Write(buf)
}
// valuesSorter implements sort.Interface to allow a slice of reflect.Value
// elements to be sorted.
type valuesSorter struct {
values []reflect.Value
strings []string // either nil or same len as values
cs *ConfigState
}
// newValuesSorter initializes a valuesSorter instance, which holds a set of
// surrogate keys on which the data should be sorted. It uses flags in
// ConfigState to decide if and how to populate those surrogate keys.
func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
vs := &valuesSorter{values: values, cs: cs}
if canSortSimply(vs.values[0].Kind()) {
return vs
}
if !cs.DisableMethods {
vs.strings = make([]string, len(values))
for i := range vs.values {
b := bytes.Buffer{}
if !handleMethods(cs, &b, vs.values[i]) {
vs.strings = nil
break
}
vs.strings[i] = b.String()
}
}
if vs.strings == nil && cs.SpewKeys {
vs.strings = make([]string, len(values))
for i := range vs.values {
vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
}
}
return vs
}
// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
// directly, or whether it should be considered for sorting by surrogate keys
// (if the ConfigState allows it).
func canSortSimply(kind reflect.Kind) bool {
// This switch parallels valueSortLess, except for the default case.
switch kind {
case reflect.Bool:
return true
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return true
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return true
case reflect.Float32, reflect.Float64:
return true
case reflect.String:
return true
case reflect.Uintptr:
return true
case reflect.Array:
return true
}
return false
}
// Len returns the number of values in the slice. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Len() int {
return len(s.values)
}
// Swap swaps the values at the passed indices. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Swap(i, j int) {
s.values[i], s.values[j] = s.values[j], s.values[i]
if s.strings != nil {
s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
}
}
// valueSortLess returns whether the first value should sort before the second
// value. It is used by valuesSorter.Less as part of the sort.Interface
// implementation.
func valueSortLess(a, b reflect.Value) bool {
switch a.Kind() {
case reflect.Bool:
return !a.Bool() && b.Bool()
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return a.Int() < b.Int()
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return a.Uint() < b.Uint()
case reflect.Float32, reflect.Float64:
return a.Float() < b.Float()
case reflect.String:
return a.String() < b.String()
case reflect.Uintptr:
return a.Uint() < b.Uint()
case reflect.Array:
// Compare the contents of both arrays.
l := a.Len()
for i := 0; i < l; i++ {
av := a.Index(i)
bv := b.Index(i)
if av.Interface() == bv.Interface() {
continue
}
return valueSortLess(av, bv)
}
}
return a.String() < b.String()
}
// Less returns whether the value at index i should sort before the
// value at index j. It is part of the sort.Interface implementation.
func (s *valuesSorter) Less(i, j int) bool {
if s.strings == nil {
return valueSortLess(s.values[i], s.values[j])
}
return s.strings[i] < s.strings[j]
}
// sortValues is a sort function that handles both native types and any type that
// can be converted to error or Stringer. Other inputs are sorted according to
// their Value.String() value to ensure display stability.
func sortValues(values []reflect.Value, cs *ConfigState) {
if len(values) == 0 {
return
}
sort.Sort(newValuesSorter(values, cs))
}
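
As a usage note, a minimal sketch of the deterministic map-key ordering that sortValues and valuesSorter enable; the map contents are invented, while ConfigState and its SortKeys field are the ones defined in this package.

package main

import "github.com/davecgh/go-spew/spew"

func main() {
	m := map[string]int{"b": 2, "a": 1, "c": 3}

	// With SortKeys enabled, map keys are ordered via newValuesSorter
	// before printing, so repeated dumps produce identical, diffable output.
	cfg := spew.ConfigState{Indent: " ", SortKeys: true}
	cfg.Dump(m)
}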

306
vendor/github.com/davecgh/go-spew/spew/config.go generated vendored Normal file
View File

@@ -0,0 +1,306 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"os"
)
// ConfigState houses the configuration options used by spew to format and
// display values. There is a global instance, Config, that is used to control
// all top-level Formatter and Dump functionality. Each ConfigState instance
// provides methods equivalent to the top-level functions.
//
// The zero value for ConfigState provides no indentation. You would typically
// want to set it to a space or a tab.
//
// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
// with default settings. See the documentation of NewDefaultConfig for default
// values.
type ConfigState struct {
// Indent specifies the string to use for each indentation level. The
// global config instance that all top-level functions use set this to a
// single space by default. If you would like more indentation, you might
// set this to a tab with "\t" or perhaps two spaces with " ".
Indent string
// MaxDepth controls the maximum number of levels to descend into nested
// data structures. The default, 0, means there is no limit.
//
// NOTE: Circular data structures are properly detected, so it is not
// necessary to set this value unless you specifically want to limit deeply
// nested data structures.
MaxDepth int
// DisableMethods specifies whether or not error and Stringer interfaces are
// invoked for types that implement them.
DisableMethods bool
// DisablePointerMethods specifies whether or not to check for and invoke
// error and Stringer interfaces on types which only accept a pointer
// receiver when the current type is not a pointer.
//
// NOTE: This might be an unsafe action since calling one of these methods
// with a pointer receiver could technically mutate the value, however,
// in practice, types which choose to satisfy an error or Stringer
// interface with a pointer receiver should not be mutating their state
// inside these interface methods. As a result, this option relies on
// access to the unsafe package, so it will not have any effect when
// running in environments without access to the unsafe package such as
// Google App Engine or with the "safe" build tag specified.
DisablePointerMethods bool
// DisablePointerAddresses specifies whether to disable the printing of
// pointer addresses. This is useful when diffing data structures in tests.
DisablePointerAddresses bool
// DisableCapacities specifies whether to disable the printing of capacities
// for arrays, slices, maps and channels. This is useful when diffing
// data structures in tests.
DisableCapacities bool
// ContinueOnMethod specifies whether or not recursion should continue once
// a custom error or Stringer interface is invoked. The default, false,
// means it will print the results of invoking the custom error or Stringer
// interface and return immediately instead of continuing to recurse into
// the internals of the data type.
//
// NOTE: This flag does not have any effect if method invocation is disabled
// via the DisableMethods or DisablePointerMethods options.
ContinueOnMethod bool
// SortKeys specifies map keys should be sorted before being printed. Use
// this to have a more deterministic, diffable output. Note that only
// native types (bool, int, uint, floats, uintptr and string) and types
// that support the error or Stringer interfaces (if methods are
// enabled) are supported, with other types sorted according to the
// reflect.Value.String() output which guarantees display stability.
SortKeys bool
// SpewKeys specifies that, as a last resort attempt, map keys should
// be spewed to strings and sorted by those strings. This is only
// considered if SortKeys is true.
SpewKeys bool
}
// Config is the active configuration of the top-level functions.
// The configuration can be changed by modifying the contents of spew.Config.
var Config = ConfigState{Indent: " "}
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the formatted string as a value that satisfies error. See NewFormatter
// for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, c.convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, c.convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, c.convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, c.convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
return fmt.Print(c.convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, c.convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
return fmt.Println(c.convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprint(a ...interface{}) string {
return fmt.Sprint(c.convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, c.convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintln(a ...interface{}) string {
return fmt.Sprintln(c.convertArgs(a)...)
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
c.Printf, c.Println, or c.Fprintf.
*/
func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(c, v)
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
fdump(c, w, a...)
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by modifying the public members
of c. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func (c *ConfigState) Dump(a ...interface{}) {
fdump(c, os.Stdout, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func (c *ConfigState) Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(c, &buf, a...)
return buf.String()
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a spew Formatter interface using
// the ConfigState associated with c.
func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = newFormatter(c, arg)
}
return formatters
}
// NewDefaultConfig returns a ConfigState with the following default settings.
//
// Indent: " "
// MaxDepth: 0
// DisableMethods: false
// DisablePointerMethods: false
// ContinueOnMethod: false
// SortKeys: false
func NewDefaultConfig() *ConfigState {
return &ConfigState{Indent: " "}
}
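
A short sketch of using a dedicated ConfigState instead of the global Config; the field values and example types are illustrative only, while the fields and methods are the ones defined above.

package main

import (
	"os"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	// A separate ConfigState keeps these settings isolated from the
	// global spew.Config used by the top-level functions.
	cfg := &spew.ConfigState{
		Indent:                  "\t",
		MaxDepth:                2,
		DisablePointerAddresses: true, // stable output, useful in test diffs
	}

	type inner struct{ N int }
	type outer struct{ In *inner }

	cfg.Fdump(os.Stdout, outer{In: &inner{N: 42}})

	// NewDefaultConfig returns a ConfigState with the documented defaults.
	spew.NewDefaultConfig().Dump(outer{In: &inner{N: 7}})
}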

211
vendor/github.com/davecgh/go-spew/spew/doc.go generated vendored Normal file
View File

@@ -0,0 +1,211 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
Package spew implements a deep pretty printer for Go data structures to aid in
debugging.
A quick overview of the additional features spew provides over the built-in
printing facilities for Go data types is as follows:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output (only when using
Dump style)
There are two different approaches spew allows for dumping Go data structures:
* Dump style which prints with newlines, customizable indentation,
and additional debug information such as types and all pointer addresses
used to indirect to the final value
* A custom Formatter interface that integrates cleanly with the standard fmt
package and replaces %v, %+v, %#v, and %#+v to provide inline printing
similar to the default %v while providing the additional functionality
outlined above and passing unsupported format verbs such as %x and %q
along to fmt
Quick Start
This section demonstrates how to quickly get started with spew. See the
sections below for further details on formatting and configuration options.
To dump a variable with full newlines, indentation, type, and pointer
information use Dump, Fdump, or Sdump:
spew.Dump(myVar1, myVar2, ...)
spew.Fdump(someWriter, myVar1, myVar2, ...)
str := spew.Sdump(myVar1, myVar2, ...)
Alternatively, if you would prefer to use format strings with a compacted inline
printing style, use the convenience wrappers Printf, Fprintf, etc with
%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
%#+v (adds types and pointer addresses):
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
Configuration Options
Configuration of spew is handled by fields in the ConfigState type. For
convenience, all of the top-level functions use a global state available
via the spew.Config global.
It is also possible to create a ConfigState instance that provides methods
equivalent to the top-level functions. This allows concurrent configuration
options. See the ConfigState documentation for more details.
The following configuration options are available:
* Indent
String to use for each indentation level for Dump functions.
It is a single space by default. A popular alternative is "\t".
* MaxDepth
Maximum number of levels to descend into nested data structures.
There is no limit by default.
* DisableMethods
Disables invocation of error and Stringer interface methods.
Method invocation is enabled by default.
* DisablePointerMethods
Disables invocation of error and Stringer interface methods on types
which only accept pointer receivers from non-pointer variables.
Pointer method invocation is enabled by default.
* DisablePointerAddresses
DisablePointerAddresses specifies whether to disable the printing of
pointer addresses. This is useful when diffing data structures in tests.
* DisableCapacities
DisableCapacities specifies whether to disable the printing of
capacities for arrays, slices, maps and channels. This is useful when
diffing data structures in tests.
* ContinueOnMethod
Enables recursion into types after invoking error and Stringer interface
methods. Recursion after method invocation is disabled by default.
* SortKeys
Specifies map keys should be sorted before being printed. Use
this to have a more deterministic, diffable output. Note that
only native types (bool, int, uint, floats, uintptr and string)
and types which implement error or Stringer interfaces are
supported, with other types sorted according to the
reflect.Value.String() output which guarantees display
stability. Natural map order is used by default.
* SpewKeys
Specifies that, as a last resort attempt, map keys should be
spewed to strings and sorted by those strings. This is only
considered if SortKeys is true.
Dump Usage
Simply call spew.Dump with a list of variables you want to dump:
spew.Dump(myVar1, myVar2, ...)
You may also call spew.Fdump if you would prefer to output to an arbitrary
io.Writer. For example, to dump to standard error:
spew.Fdump(os.Stderr, myVar1, myVar2, ...)
A third option is to call spew.Sdump to get the formatted output as a string:
str := spew.Sdump(myVar1, myVar2, ...)
Sample Dump Output
See the Dump example for details on the setup of the types and variables being
shown here.
(main.Foo) {
unexportedField: (*main.Bar)(0xf84002e210)({
flag: (main.Flag) flagTwo,
data: (uintptr) <nil>
}),
ExportedField: (map[interface {}]interface {}) (len=1) {
(string) (len=3) "one": (bool) true
}
}
Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C
command as shown.
([]uint8) (len=32 cap=32) {
00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
00000020 31 32 |12|
}
Custom Formatter
Spew provides a custom formatter that implements the fmt.Formatter interface
so that it integrates cleanly with standard fmt package printing functions. The
formatter is useful for inline printing of smaller data types similar to the
standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Custom Formatter Usage
The simplest way to make use of the spew custom formatter is to call one of the
convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
functions have syntax you are most likely already familiar with:
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Println(myVar, myVar2)
spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
See the Index for the full list of convenience functions.
Sample Formatter Output
Double pointer to a uint8:
%v: <**>5
%+v: <**>(0xf8400420d0->0xf8400420c8)5
%#v: (**uint8)5
%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5
Pointer to circular struct with a uint8 field and a pointer to itself:
%v: <*>{1 <*><shown>}
%+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
%#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}
See the Printf example for details on the setup of variables being shown
here.
Errors
Since it is possible for custom Stringer/error interfaces to panic, spew
detects them and handles them internally by printing the panic information
inline with the output. Since spew is intended to provide deep pretty printing
capabilities on structures, it intentionally does not return any errors.
*/
package spew
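
To complement the package documentation, a minimal quick-start sketch; the Foo/Bar types loosely mirror the sample output above and are assumptions, while Dump and Printf are the package's own entry points.

package main

import "github.com/davecgh/go-spew/spew"

type Flag int

type Bar struct {
	flag Flag
	data uintptr
}

type Foo struct {
	unexportedField *Bar
	ExportedField   map[interface{}]interface{}
}

func main() {
	f := Foo{
		unexportedField: &Bar{flag: 1},
		ExportedField:   map[interface{}]interface{}{"one": true},
	}

	// Dump style: newlines, indentation, full types and pointer addresses.
	spew.Dump(f)

	// Formatter style: inline output driven by the %v verb family.
	spew.Printf("compact: %v -- with types: %#v\n", f, f)
}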

509
vendor/github.com/davecgh/go-spew/spew/dump.go generated vendored Normal file
View File

@@ -0,0 +1,509 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"encoding/hex"
"fmt"
"io"
"os"
"reflect"
"regexp"
"strconv"
"strings"
)
var (
// uint8Type is a reflect.Type representing a uint8. It is used to
// convert cgo types to uint8 slices for hexdumping.
uint8Type = reflect.TypeOf(uint8(0))
// cCharRE is a regular expression that matches a cgo char.
// It is used to detect character arrays to hexdump them.
cCharRE = regexp.MustCompile(`^.*\._Ctype_char$`)
// cUnsignedCharRE is a regular expression that matches a cgo unsigned
// char. It is used to detect unsigned character arrays to hexdump
// them.
cUnsignedCharRE = regexp.MustCompile(`^.*\._Ctype_unsignedchar$`)
// cUint8tCharRE is a regular expression that matches a cgo uint8_t.
// It is used to detect uint8_t arrays to hexdump them.
cUint8tCharRE = regexp.MustCompile(`^.*\._Ctype_uint8_t$`)
)
// dumpState contains information about the state of a dump operation.
type dumpState struct {
w io.Writer
depth int
pointers map[uintptr]int
ignoreNextType bool
ignoreNextIndent bool
cs *ConfigState
}
// indent performs indentation according to the depth level and cs.Indent
// option.
func (d *dumpState) indent() {
if d.ignoreNextIndent {
d.ignoreNextIndent = false
return
}
d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
}
// unpackValue returns values inside of non-nil interfaces when possible.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface && !v.IsNil() {
v = v.Elem()
}
return v
}
// dumpPtr handles formatting of pointers by indirecting them as necessary.
func (d *dumpState) dumpPtr(v reflect.Value) {
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range d.pointers {
if depth >= d.depth {
delete(d.pointers, k)
}
}
// Keep list of all dereferenced pointers to show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := d.pointers[addr]; ok && pd < d.depth {
cycleFound = true
indirects--
break
}
d.pointers[addr] = d.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type information.
d.w.Write(openParenBytes)
d.w.Write(bytes.Repeat(asteriskBytes, indirects))
d.w.Write([]byte(ve.Type().String()))
d.w.Write(closeParenBytes)
// Display pointer information.
if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {
d.w.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
d.w.Write(pointerChainBytes)
}
printHexPtr(d.w, addr)
}
d.w.Write(closeParenBytes)
}
// Display dereferenced value.
d.w.Write(openParenBytes)
switch {
case nilFound:
d.w.Write(nilAngleBytes)
case cycleFound:
d.w.Write(circularBytes)
default:
d.ignoreNextType = true
d.dump(ve)
}
d.w.Write(closeParenBytes)
}
// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
// reflection) arrays and slices are dumped in hexdump -C fashion.
func (d *dumpState) dumpSlice(v reflect.Value) {
// Determine whether this type should be hex dumped or not. Also,
// for types which should be hexdumped, try to use the underlying data
// first, then fall back to trying to convert them to a uint8 slice.
var buf []uint8
doConvert := false
doHexDump := false
numEntries := v.Len()
if numEntries > 0 {
vt := v.Index(0).Type()
vts := vt.String()
switch {
// C types that need to be converted.
case cCharRE.MatchString(vts):
fallthrough
case cUnsignedCharRE.MatchString(vts):
fallthrough
case cUint8tCharRE.MatchString(vts):
doConvert = true
// Try to use existing uint8 slices and fall back to converting
// and copying if that fails.
case vt.Kind() == reflect.Uint8:
// We need an addressable interface to convert the type
// to a byte slice. However, the reflect package won't
// give us an interface on certain things like
// unexported struct fields in order to enforce
// visibility rules. We use unsafe, when available, to
// bypass these restrictions since this package does not
// mutate the values.
vs := v
if !vs.CanInterface() || !vs.CanAddr() {
vs = unsafeReflectValue(vs)
}
if !UnsafeDisabled {
vs = vs.Slice(0, numEntries)
// Use the existing uint8 slice if it can be
// type asserted.
iface := vs.Interface()
if slice, ok := iface.([]uint8); ok {
buf = slice
doHexDump = true
break
}
}
// The underlying data needs to be converted if it can't
// be type asserted to a uint8 slice.
doConvert = true
}
// Copy and convert the underlying type if needed.
if doConvert && vt.ConvertibleTo(uint8Type) {
// Convert and copy each element into a uint8 byte
// slice.
buf = make([]uint8, numEntries)
for i := 0; i < numEntries; i++ {
vv := v.Index(i)
buf[i] = uint8(vv.Convert(uint8Type).Uint())
}
doHexDump = true
}
}
// Hexdump the entire slice as needed.
if doHexDump {
indent := strings.Repeat(d.cs.Indent, d.depth)
str := indent + hex.Dump(buf)
str = strings.Replace(str, "\n", "\n"+indent, -1)
str = strings.TrimRight(str, d.cs.Indent)
d.w.Write([]byte(str))
return
}
// Recursively call dump for each item.
for i := 0; i < numEntries; i++ {
d.dump(d.unpackValue(v.Index(i)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
// dump is the main workhorse for dumping a value. It uses the passed reflect
// value to figure out what kind of object we are dealing with and formats it
// appropriately. It is a recursive function, however circular data structures
// are detected and handled properly.
func (d *dumpState) dump(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
d.w.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
d.indent()
d.dumpPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !d.ignoreNextType {
d.indent()
d.w.Write(openParenBytes)
d.w.Write([]byte(v.Type().String()))
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
d.ignoreNextType = false
// Display length and capacity if the built-in len and cap functions
// work with the value's kind and the len/cap itself is non-zero.
valueLen, valueCap := 0, 0
switch v.Kind() {
case reflect.Array, reflect.Slice, reflect.Chan:
valueLen, valueCap = v.Len(), v.Cap()
case reflect.Map, reflect.String:
valueLen = v.Len()
}
if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {
d.w.Write(openParenBytes)
if valueLen != 0 {
d.w.Write(lenEqualsBytes)
printInt(d.w, int64(valueLen), 10)
}
if !d.cs.DisableCapacities && valueCap != 0 {
if valueLen != 0 {
d.w.Write(spaceBytes)
}
d.w.Write(capEqualsBytes)
printInt(d.w, int64(valueCap), 10)
}
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
// Call Stringer/error interfaces if they exist and the handle methods flag
// is enabled
if !d.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(d.cs, d.w, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(d.w, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(d.w, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(d.w, v.Uint(), 10)
case reflect.Float32:
printFloat(d.w, v.Float(), 32)
case reflect.Float64:
printFloat(d.w, v.Float(), 64)
case reflect.Complex64:
printComplex(d.w, v.Complex(), 32)
case reflect.Complex128:
printComplex(d.w, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
d.dumpSlice(v)
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.String:
d.w.Write([]byte(strconv.Quote(v.String())))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
d.w.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
numEntries := v.Len()
keys := v.MapKeys()
if d.cs.SortKeys {
sortValues(keys, d.cs)
}
for i, key := range keys {
d.dump(d.unpackValue(key))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.MapIndex(key)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Struct:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
vt := v.Type()
numFields := v.NumField()
for i := 0; i < numFields; i++ {
d.indent()
vtf := vt.Field(i)
d.w.Write([]byte(vtf.Name))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.Field(i)))
if i < (numFields - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(d.w, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(d.w, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it in case any new
// types are added.
default:
if v.CanInterface() {
fmt.Fprintf(d.w, "%v", v.Interface())
} else {
fmt.Fprintf(d.w, "%v", v.String())
}
}
}
// fdump is a helper function to consolidate the logic from the various public
// methods which take varying writers and config states.
func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
for _, arg := range a {
if arg == nil {
w.Write(interfaceBytes)
w.Write(spaceBytes)
w.Write(nilAngleBytes)
w.Write(newlineBytes)
continue
}
d := dumpState{w: w, cs: cs}
d.pointers = make(map[uintptr]int)
d.dump(reflect.ValueOf(arg))
d.w.Write(newlineBytes)
}
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func Fdump(w io.Writer, a ...interface{}) {
fdump(&Config, w, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(&Config, &buf, a...)
return buf.String()
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by an exported package global,
spew.Config. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func Dump(a ...interface{}) {
fdump(&Config, os.Stdout, a...)
}
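
A brief sketch of the three Dump entry points defined above; the sample values are arbitrary, and byte slices go through the hexdump path implemented by dumpSlice.

package main

import (
	"bytes"
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	// Sdump returns the Dump-style output as a string; the byte slice is
	// rendered hexdump -C style by dumpSlice.
	fmt.Print(spew.Sdump([]byte("spew")))

	// Fdump writes the same output to any io.Writer.
	var buf bytes.Buffer
	spew.Fdump(&buf, map[string]int{"answer": 42})
	fmt.Print(buf.String())

	// Dump writes directly to standard output.
	spew.Dump([]string{"a", "b"})
}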

419
vendor/github.com/davecgh/go-spew/spew/format.go generated vendored Normal file
View File

@@ -0,0 +1,419 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"reflect"
"strconv"
"strings"
)
// supportedFlags is a list of all the character flags supported by fmt package.
const supportedFlags = "0-+# "
// formatState implements the fmt.Formatter interface and contains information
// about the state of a formatting operation. The NewFormatter function can
// be used to get a new Formatter which can be used directly as arguments
// in standard fmt package printing calls.
type formatState struct {
value interface{}
fs fmt.State
depth int
pointers map[uintptr]int
ignoreNextType bool
cs *ConfigState
}
// buildDefaultFormat recreates the original format string without precision
// and width information to pass in to fmt.Sprintf in the case of an
// unrecognized type. Unless new types are added to the language, this
// function won't ever be called.
func (f *formatState) buildDefaultFormat() (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
buf.WriteRune('v')
format = buf.String()
return format
}
// constructOrigFormat recreates the original format string including precision
// and width information to pass along to the standard fmt package. This allows
// automatic deferral of all format strings this package doesn't support.
func (f *formatState) constructOrigFormat(verb rune) (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
if width, ok := f.fs.Width(); ok {
buf.WriteString(strconv.Itoa(width))
}
if precision, ok := f.fs.Precision(); ok {
buf.Write(precisionBytes)
buf.WriteString(strconv.Itoa(precision))
}
buf.WriteRune(verb)
format = buf.String()
return format
}
// unpackValue returns values inside of non-nil interfaces when possible and
// ensures that types for values which have been unpacked from an interface
// are displayed when the show types flag is also set.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface {
f.ignoreNextType = false
if !v.IsNil() {
v = v.Elem()
}
}
return v
}
// formatPtr handles formatting of pointers by indirecting them as necessary.
func (f *formatState) formatPtr(v reflect.Value) {
// Display nil if top level pointer is nil.
showTypes := f.fs.Flag('#')
if v.IsNil() && (!showTypes || f.ignoreNextType) {
f.fs.Write(nilAngleBytes)
return
}
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range f.pointers {
if depth >= f.depth {
delete(f.pointers, k)
}
}
// Keep list of all dereferenced pointers to possibly show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := f.pointers[addr]; ok && pd < f.depth {
cycleFound = true
indirects--
break
}
f.pointers[addr] = f.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type or indirection level depending on flags.
if showTypes && !f.ignoreNextType {
f.fs.Write(openParenBytes)
f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
f.fs.Write([]byte(ve.Type().String()))
f.fs.Write(closeParenBytes)
} else {
if nilFound || cycleFound {
indirects += strings.Count(ve.Type().String(), "*")
}
f.fs.Write(openAngleBytes)
f.fs.Write([]byte(strings.Repeat("*", indirects)))
f.fs.Write(closeAngleBytes)
}
// Display pointer information depending on flags.
if f.fs.Flag('+') && (len(pointerChain) > 0) {
f.fs.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
f.fs.Write(pointerChainBytes)
}
printHexPtr(f.fs, addr)
}
f.fs.Write(closeParenBytes)
}
// Display dereferenced value.
switch {
case nilFound:
f.fs.Write(nilAngleBytes)
case cycleFound:
f.fs.Write(circularShortBytes)
default:
f.ignoreNextType = true
f.format(ve)
}
}
// format is the main workhorse for providing the Formatter interface. It
// uses the passed reflect value to figure out what kind of object we are
// dealing with and formats it appropriately. It is a recursive function,
// however circular data structures are detected and handled properly.
func (f *formatState) format(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
f.fs.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
f.formatPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !f.ignoreNextType && f.fs.Flag('#') {
f.fs.Write(openParenBytes)
f.fs.Write([]byte(v.Type().String()))
f.fs.Write(closeParenBytes)
}
f.ignoreNextType = false
// Call Stringer/error interfaces if they exist and the handle methods
// flag is enabled.
if !f.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(f.cs, f.fs, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(f.fs, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(f.fs, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(f.fs, v.Uint(), 10)
case reflect.Float32:
printFloat(f.fs, v.Float(), 32)
case reflect.Float64:
printFloat(f.fs, v.Float(), 64)
case reflect.Complex64:
printComplex(f.fs, v.Complex(), 32)
case reflect.Complex128:
printComplex(f.fs, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
f.fs.Write(openBracketBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
numEntries := v.Len()
for i := 0; i < numEntries; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(v.Index(i)))
}
}
f.depth--
f.fs.Write(closeBracketBytes)
case reflect.String:
f.fs.Write([]byte(v.String()))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
f.fs.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
f.fs.Write(openMapBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
keys := v.MapKeys()
if f.cs.SortKeys {
sortValues(keys, f.cs)
}
for i, key := range keys {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(key))
f.fs.Write(colonBytes)
f.ignoreNextType = true
f.format(f.unpackValue(v.MapIndex(key)))
}
}
f.depth--
f.fs.Write(closeMapBytes)
case reflect.Struct:
numFields := v.NumField()
f.fs.Write(openBraceBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
vt := v.Type()
for i := 0; i < numFields; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
vtf := vt.Field(i)
if f.fs.Flag('+') || f.fs.Flag('#') {
f.fs.Write([]byte(vtf.Name))
f.fs.Write(colonBytes)
}
f.format(f.unpackValue(v.Field(i)))
}
}
f.depth--
f.fs.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(f.fs, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(f.fs, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it if any get added.
default:
format := f.buildDefaultFormat()
if v.CanInterface() {
fmt.Fprintf(f.fs, format, v.Interface())
} else {
fmt.Fprintf(f.fs, format, v.String())
}
}
}
// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
// details.
func (f *formatState) Format(fs fmt.State, verb rune) {
f.fs = fs
// Use standard formatting for verbs that are not v.
if verb != 'v' {
format := f.constructOrigFormat(verb)
fmt.Fprintf(fs, format, f.value)
return
}
if f.value == nil {
if fs.Flag('#') {
fs.Write(interfaceBytes)
}
fs.Write(nilAngleBytes)
return
}
f.format(reflect.ValueOf(f.value))
}
// newFormatter is a helper function to consolidate the logic from the various
// public methods which take varying config states.
func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
fs := &formatState{value: v, cs: cs}
fs.pointers = make(map[uintptr]int)
return fs
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
Printf, Println, or Fprintf.
*/
func NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(&Config, v)
}
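
For illustration, a minimal sketch of plugging NewFormatter into the standard fmt package; the point type is hypothetical, and the verb behaviour is the one documented above.

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

type point struct{ X, Y int }

func main() {
	p := &point{X: 1, Y: 2}

	// Only the %v verb family is intercepted by the custom formatter;
	// other verbs and any width/precision are deferred to fmt itself.
	fmt.Printf("%v\n", spew.NewFormatter(p))
	fmt.Printf("%+v\n", spew.NewFormatter(p))
	fmt.Printf("%#+v\n", spew.NewFormatter(p))
}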

148
vendor/github.com/davecgh/go-spew/spew/spew.go generated vendored Normal file
View File

@@ -0,0 +1,148 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"fmt"
"io"
)
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the formatted string as a value that satisfies error. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
func Print(a ...interface{}) (n int, err error) {
return fmt.Print(convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
func Println(a ...interface{}) (n int, err error) {
return fmt.Println(convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprint(a ...interface{}) string {
return fmt.Sprint(convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintln(a ...interface{}) string {
return fmt.Sprintln(convertArgs(a)...)
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a default spew Formatter interface.
func convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = NewFormatter(arg)
}
return formatters
}
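
The wrappers above exist so that spew can be dropped into existing fmt call sites. A minimal sketch of that usage follows; it is not part of the vendored file and assumes the usual github.com/davecgh/go-spew/spew import path.

package main

import "github.com/davecgh/go-spew/spew"

type server struct {
	Addr    string
	Retries int
	Tags    []string
}

func main() {
	s := &server{Addr: "127.0.0.1:8080", Retries: 3, Tags: []string{"edge", "tls"}}

	// Each argument is wrapped by NewFormatter, so %v on a pointer prints
	// the dereferenced struct rather than just a hex address.
	spew.Printf("server = %v\n", s)

	// Sprintf behaves the same way and returns the formatted string.
	msg := spew.Sprintf("retrying %v with tags %v", s.Retries, s.Tags)
	_ = msg
}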

25
vendor/github.com/gorilla/websocket/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,25 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
.idea/
*.iml

9
vendor/github.com/gorilla/websocket/AUTHORS generated vendored Normal file
View File

@@ -0,0 +1,9 @@
# This is the official list of Gorilla WebSocket authors for copyright
# purposes.
#
# Please keep the list sorted.
Gary Burd <gary@beagledreams.com>
Google LLC (https://opensource.google.com/)
Joachim Bauch <mail@joachim-bauch.de>

22
vendor/github.com/gorilla/websocket/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,22 @@
Copyright (c) 2013 The Gorilla WebSocket Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

39
vendor/github.com/gorilla/websocket/README.md generated vendored Normal file
View File

@@ -0,0 +1,39 @@
# Gorilla WebSocket
[![GoDoc](https://godoc.org/github.com/gorilla/websocket?status.svg)](https://godoc.org/github.com/gorilla/websocket)
[![CircleCI](https://circleci.com/gh/gorilla/websocket.svg?style=svg)](https://circleci.com/gh/gorilla/websocket)
Gorilla WebSocket is a [Go](http://golang.org/) implementation of the
[WebSocket](http://www.rfc-editor.org/rfc/rfc6455.txt) protocol.
---
⚠️ **[The Gorilla WebSocket Package is looking for a new maintainer](https://github.com/gorilla/websocket/issues/370)**
---
### Documentation
* [API Reference](https://pkg.go.dev/github.com/gorilla/websocket?tab=doc)
* [Chat example](https://github.com/gorilla/websocket/tree/master/examples/chat)
* [Command example](https://github.com/gorilla/websocket/tree/master/examples/command)
* [Client and server example](https://github.com/gorilla/websocket/tree/master/examples/echo)
* [File watch example](https://github.com/gorilla/websocket/tree/master/examples/filewatch)
### Status
The Gorilla WebSocket package provides a complete and tested implementation of
the [WebSocket](http://www.rfc-editor.org/rfc/rfc6455.txt) protocol. The
package API is stable.
### Installation
go get github.com/gorilla/websocket
### Protocol Compliance
The Gorilla WebSocket package passes the server tests in the [Autobahn Test
Suite](https://github.com/crossbario/autobahn-testsuite) using the application in the [examples/autobahn
subdirectory](https://github.com/gorilla/websocket/tree/master/examples/autobahn).
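
The README links to complete examples; as a quick orientation, here is a minimal client sketch that dials an echo endpoint. The ws://localhost:8080/echo URL is a placeholder, not something defined by the package.

package main

import (
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	// Placeholder endpoint; substitute a real server URL.
	c, _, err := websocket.DefaultDialer.Dial("ws://localhost:8080/echo", nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer c.Close()

	// Send one text message and read the echoed reply.
	if err := c.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
		log.Fatal("write:", err)
	}
	_, msg, err := c.ReadMessage()
	if err != nil {
		log.Fatal("read:", err)
	}
	log.Printf("received: %s", msg)
}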

422
vendor/github.com/gorilla/websocket/client.go generated vendored Normal file
View File

@@ -0,0 +1,422 @@
// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"bytes"
"context"
"crypto/tls"
"errors"
"io"
"io/ioutil"
"net"
"net/http"
"net/http/httptrace"
"net/url"
"strings"
"time"
)
// ErrBadHandshake is returned when the server response to opening handshake is
// invalid.
var ErrBadHandshake = errors.New("websocket: bad handshake")
var errInvalidCompression = errors.New("websocket: invalid compression negotiation")
// NewClient creates a new client connection using the given net connection.
// The URL u specifies the host and request URI. Use requestHeader to specify
// the origin (Origin), subprotocols (Sec-WebSocket-Protocol) and cookies
// (Cookie). Use the response.Header to get the selected subprotocol
// (Sec-WebSocket-Protocol) and cookies (Set-Cookie).
//
// If the WebSocket handshake fails, ErrBadHandshake is returned along with a
// non-nil *http.Response so that callers can handle redirects, authentication,
// etc.
//
// Deprecated: Use Dialer instead.
func NewClient(netConn net.Conn, u *url.URL, requestHeader http.Header, readBufSize, writeBufSize int) (c *Conn, response *http.Response, err error) {
d := Dialer{
ReadBufferSize: readBufSize,
WriteBufferSize: writeBufSize,
NetDial: func(net, addr string) (net.Conn, error) {
return netConn, nil
},
}
return d.Dial(u.String(), requestHeader)
}
// A Dialer contains options for connecting to WebSocket server.
//
// It is safe to call Dialer's methods concurrently.
type Dialer struct {
// NetDial specifies the dial function for creating TCP connections. If
// NetDial is nil, net.Dial is used.
NetDial func(network, addr string) (net.Conn, error)
// NetDialContext specifies the dial function for creating TCP connections. If
// NetDialContext is nil, NetDial is used.
NetDialContext func(ctx context.Context, network, addr string) (net.Conn, error)
// NetDialTLSContext specifies the dial function for creating TLS/TCP connections. If
// NetDialTLSContext is nil, NetDialContext is used.
// If NetDialTLSContext is set, Dial assumes the TLS handshake is done there and
// TLSClientConfig is ignored.
NetDialTLSContext func(ctx context.Context, network, addr string) (net.Conn, error)
// Proxy specifies a function to return a proxy for a given
// Request. If the function returns a non-nil error, the
// request is aborted with the provided error.
// If Proxy is nil or returns a nil *URL, no proxy is used.
Proxy func(*http.Request) (*url.URL, error)
// TLSClientConfig specifies the TLS configuration to use with tls.Client.
// If nil, the default configuration is used.
// If either NetDialTLS or NetDialTLSContext are set, Dial assumes the TLS handshake
// is done there and TLSClientConfig is ignored.
TLSClientConfig *tls.Config
// HandshakeTimeout specifies the duration for the handshake to complete.
HandshakeTimeout time.Duration
// ReadBufferSize and WriteBufferSize specify I/O buffer sizes in bytes. If a buffer
// size is zero, then a useful default size is used. The I/O buffer sizes
// do not limit the size of the messages that can be sent or received.
ReadBufferSize, WriteBufferSize int
// WriteBufferPool is a pool of buffers for write operations. If the value
// is not set, then write buffers are allocated to the connection for the
// lifetime of the connection.
//
// A pool is most useful when the application has a modest volume of writes
// across a large number of connections.
//
// Applications should use a single pool for each unique value of
// WriteBufferSize.
WriteBufferPool BufferPool
// Subprotocols specifies the client's requested subprotocols.
Subprotocols []string
// EnableCompression specifies if the client should attempt to negotiate
// per message compression (RFC 7692). Setting this value to true does not
// guarantee that compression will be supported. Currently only "no context
// takeover" modes are supported.
EnableCompression bool
// Jar specifies the cookie jar.
// If Jar is nil, cookies are not sent in requests and ignored
// in responses.
Jar http.CookieJar
}
// Dial creates a new client connection by calling DialContext with a background context.
func (d *Dialer) Dial(urlStr string, requestHeader http.Header) (*Conn, *http.Response, error) {
return d.DialContext(context.Background(), urlStr, requestHeader)
}
var errMalformedURL = errors.New("malformed ws or wss URL")
func hostPortNoPort(u *url.URL) (hostPort, hostNoPort string) {
hostPort = u.Host
hostNoPort = u.Host
if i := strings.LastIndex(u.Host, ":"); i > strings.LastIndex(u.Host, "]") {
hostNoPort = hostNoPort[:i]
} else {
switch u.Scheme {
case "wss":
hostPort += ":443"
case "https":
hostPort += ":443"
default:
hostPort += ":80"
}
}
return hostPort, hostNoPort
}
// DefaultDialer is a dialer with all fields set to the default values.
var DefaultDialer = &Dialer{
Proxy: http.ProxyFromEnvironment,
HandshakeTimeout: 45 * time.Second,
}
// nilDialer is the dialer to use when the receiver is nil.
var nilDialer = *DefaultDialer
// DialContext creates a new client connection. Use requestHeader to specify the
// origin (Origin), subprotocols (Sec-WebSocket-Protocol) and cookies (Cookie).
// Use the response.Header to get the selected subprotocol
// (Sec-WebSocket-Protocol) and cookies (Set-Cookie).
//
// The context will be used in the request and in the Dialer.
//
// If the WebSocket handshake fails, ErrBadHandshake is returned along with a
// non-nil *http.Response so that callers can handle redirects, authentication,
// etcetera. The response body may not contain the entire response and does not
// need to be closed by the application.
func (d *Dialer) DialContext(ctx context.Context, urlStr string, requestHeader http.Header) (*Conn, *http.Response, error) {
if d == nil {
d = &nilDialer
}
challengeKey, err := generateChallengeKey()
if err != nil {
return nil, nil, err
}
u, err := url.Parse(urlStr)
if err != nil {
return nil, nil, err
}
switch u.Scheme {
case "ws":
u.Scheme = "http"
case "wss":
u.Scheme = "https"
default:
return nil, nil, errMalformedURL
}
if u.User != nil {
// User name and password are not allowed in websocket URIs.
return nil, nil, errMalformedURL
}
req := &http.Request{
Method: http.MethodGet,
URL: u,
Proto: "HTTP/1.1",
ProtoMajor: 1,
ProtoMinor: 1,
Header: make(http.Header),
Host: u.Host,
}
req = req.WithContext(ctx)
// Set the cookies present in the cookie jar of the dialer
if d.Jar != nil {
for _, cookie := range d.Jar.Cookies(u) {
req.AddCookie(cookie)
}
}
// Set the request headers using the capitalization for names and values in
// RFC examples. Although the capitalization shouldn't matter, there are
// servers that depend on it. The Header.Set method is not used because the
// method canonicalizes the header names.
req.Header["Upgrade"] = []string{"websocket"}
req.Header["Connection"] = []string{"Upgrade"}
req.Header["Sec-WebSocket-Key"] = []string{challengeKey}
req.Header["Sec-WebSocket-Version"] = []string{"13"}
if len(d.Subprotocols) > 0 {
req.Header["Sec-WebSocket-Protocol"] = []string{strings.Join(d.Subprotocols, ", ")}
}
for k, vs := range requestHeader {
switch {
case k == "Host":
if len(vs) > 0 {
req.Host = vs[0]
}
case k == "Upgrade" ||
k == "Connection" ||
k == "Sec-Websocket-Key" ||
k == "Sec-Websocket-Version" ||
k == "Sec-Websocket-Extensions" ||
(k == "Sec-Websocket-Protocol" && len(d.Subprotocols) > 0):
return nil, nil, errors.New("websocket: duplicate header not allowed: " + k)
case k == "Sec-Websocket-Protocol":
req.Header["Sec-WebSocket-Protocol"] = vs
default:
req.Header[k] = vs
}
}
if d.EnableCompression {
req.Header["Sec-WebSocket-Extensions"] = []string{"permessage-deflate; server_no_context_takeover; client_no_context_takeover"}
}
if d.HandshakeTimeout != 0 {
var cancel func()
ctx, cancel = context.WithTimeout(ctx, d.HandshakeTimeout)
defer cancel()
}
// Get network dial function.
var netDial func(network, add string) (net.Conn, error)
switch u.Scheme {
case "http":
if d.NetDialContext != nil {
netDial = func(network, addr string) (net.Conn, error) {
return d.NetDialContext(ctx, network, addr)
}
} else if d.NetDial != nil {
netDial = d.NetDial
}
case "https":
if d.NetDialTLSContext != nil {
netDial = func(network, addr string) (net.Conn, error) {
return d.NetDialTLSContext(ctx, network, addr)
}
} else if d.NetDialContext != nil {
netDial = func(network, addr string) (net.Conn, error) {
return d.NetDialContext(ctx, network, addr)
}
} else if d.NetDial != nil {
netDial = d.NetDial
}
default:
return nil, nil, errMalformedURL
}
if netDial == nil {
netDialer := &net.Dialer{}
netDial = func(network, addr string) (net.Conn, error) {
return netDialer.DialContext(ctx, network, addr)
}
}
// If needed, wrap the dial function to set the connection deadline.
if deadline, ok := ctx.Deadline(); ok {
forwardDial := netDial
netDial = func(network, addr string) (net.Conn, error) {
c, err := forwardDial(network, addr)
if err != nil {
return nil, err
}
err = c.SetDeadline(deadline)
if err != nil {
c.Close()
return nil, err
}
return c, nil
}
}
// If needed, wrap the dial function to connect through a proxy.
if d.Proxy != nil {
proxyURL, err := d.Proxy(req)
if err != nil {
return nil, nil, err
}
if proxyURL != nil {
dialer, err := proxy_FromURL(proxyURL, netDialerFunc(netDial))
if err != nil {
return nil, nil, err
}
netDial = dialer.Dial
}
}
hostPort, hostNoPort := hostPortNoPort(u)
trace := httptrace.ContextClientTrace(ctx)
if trace != nil && trace.GetConn != nil {
trace.GetConn(hostPort)
}
netConn, err := netDial("tcp", hostPort)
if trace != nil && trace.GotConn != nil {
trace.GotConn(httptrace.GotConnInfo{
Conn: netConn,
})
}
if err != nil {
return nil, nil, err
}
defer func() {
if netConn != nil {
netConn.Close()
}
}()
if u.Scheme == "https" && d.NetDialTLSContext == nil {
// If NetDialTLSContext is set, assume that the TLS handshake has already been done
cfg := cloneTLSConfig(d.TLSClientConfig)
if cfg.ServerName == "" {
cfg.ServerName = hostNoPort
}
tlsConn := tls.Client(netConn, cfg)
netConn = tlsConn
if trace != nil && trace.TLSHandshakeStart != nil {
trace.TLSHandshakeStart()
}
err := doHandshake(ctx, tlsConn, cfg)
if trace != nil && trace.TLSHandshakeDone != nil {
trace.TLSHandshakeDone(tlsConn.ConnectionState(), err)
}
if err != nil {
return nil, nil, err
}
}
conn := newConn(netConn, false, d.ReadBufferSize, d.WriteBufferSize, d.WriteBufferPool, nil, nil)
if err := req.Write(netConn); err != nil {
return nil, nil, err
}
if trace != nil && trace.GotFirstResponseByte != nil {
if peek, err := conn.br.Peek(1); err == nil && len(peek) == 1 {
trace.GotFirstResponseByte()
}
}
resp, err := http.ReadResponse(conn.br, req)
if err != nil {
return nil, nil, err
}
if d.Jar != nil {
if rc := resp.Cookies(); len(rc) > 0 {
d.Jar.SetCookies(u, rc)
}
}
if resp.StatusCode != 101 ||
!tokenListContainsValue(resp.Header, "Upgrade", "websocket") ||
!tokenListContainsValue(resp.Header, "Connection", "upgrade") ||
resp.Header.Get("Sec-Websocket-Accept") != computeAcceptKey(challengeKey) {
// Before closing the network connection on return from this
// function, slurp up some of the response to aid application
// debugging.
buf := make([]byte, 1024)
n, _ := io.ReadFull(resp.Body, buf)
resp.Body = ioutil.NopCloser(bytes.NewReader(buf[:n]))
return nil, resp, ErrBadHandshake
}
for _, ext := range parseExtensions(resp.Header) {
if ext[""] != "permessage-deflate" {
continue
}
_, snct := ext["server_no_context_takeover"]
_, cnct := ext["client_no_context_takeover"]
if !snct || !cnct {
return nil, resp, errInvalidCompression
}
conn.newCompressionWriter = compressNoContextTakeover
conn.newDecompressionReader = decompressNoContextTakeover
break
}
resp.Body = ioutil.NopCloser(bytes.NewReader([]byte{}))
conn.subprotocol = resp.Header.Get("Sec-Websocket-Protocol")
netConn.SetDeadline(time.Time{})
netConn = nil // to avoid close in defer.
return conn, resp, nil
}
func cloneTLSConfig(cfg *tls.Config) *tls.Config {
if cfg == nil {
return &tls.Config{}
}
return cfg.Clone()
}
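
A usage sketch for the Dialer above: DialContext bounds the dial and handshake with the context deadline and exposes the negotiated subprotocol on the returned connection. The endpoint, subprotocol name and Origin value are placeholders.

package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	d := &websocket.Dialer{
		HandshakeTimeout: 10 * time.Second,
		Subprotocols:     []string{"chat"}, // placeholder subprotocol
	}

	// The context deadline bounds dialing and the handshake.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	header := http.Header{}
	header.Set("Origin", "http://localhost") // placeholder origin

	conn, resp, err := d.DialContext(ctx, "ws://localhost:8080/ws", header)
	if err != nil {
		// On handshake failure resp may be non-nil to aid debugging.
		if resp != nil {
			log.Printf("handshake failed with status %d", resp.StatusCode)
		}
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("negotiated subprotocol:", conn.Subprotocol())
}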

148
vendor/github.com/gorilla/websocket/compression.go generated vendored Normal file
View File

@@ -0,0 +1,148 @@
// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"compress/flate"
"errors"
"io"
"strings"
"sync"
)
const (
minCompressionLevel = -2 // flate.HuffmanOnly not defined in Go < 1.6
maxCompressionLevel = flate.BestCompression
defaultCompressionLevel = 1
)
var (
flateWriterPools [maxCompressionLevel - minCompressionLevel + 1]sync.Pool
flateReaderPool = sync.Pool{New: func() interface{} {
return flate.NewReader(nil)
}}
)
func decompressNoContextTakeover(r io.Reader) io.ReadCloser {
const tail =
// Add four bytes as specified in RFC
"\x00\x00\xff\xff" +
// Add final block to squelch unexpected EOF error from flate reader.
"\x01\x00\x00\xff\xff"
fr, _ := flateReaderPool.Get().(io.ReadCloser)
fr.(flate.Resetter).Reset(io.MultiReader(r, strings.NewReader(tail)), nil)
return &flateReadWrapper{fr}
}
func isValidCompressionLevel(level int) bool {
return minCompressionLevel <= level && level <= maxCompressionLevel
}
func compressNoContextTakeover(w io.WriteCloser, level int) io.WriteCloser {
p := &flateWriterPools[level-minCompressionLevel]
tw := &truncWriter{w: w}
fw, _ := p.Get().(*flate.Writer)
if fw == nil {
fw, _ = flate.NewWriter(tw, level)
} else {
fw.Reset(tw)
}
return &flateWriteWrapper{fw: fw, tw: tw, p: p}
}
// truncWriter is an io.Writer that writes all but the last four bytes of the
// stream to another io.Writer.
type truncWriter struct {
w io.WriteCloser
n int
p [4]byte
}
func (w *truncWriter) Write(p []byte) (int, error) {
n := 0
// fill buffer first for simplicity.
if w.n < len(w.p) {
n = copy(w.p[w.n:], p)
p = p[n:]
w.n += n
if len(p) == 0 {
return n, nil
}
}
m := len(p)
if m > len(w.p) {
m = len(w.p)
}
if nn, err := w.w.Write(w.p[:m]); err != nil {
return n + nn, err
}
copy(w.p[:], w.p[m:])
copy(w.p[len(w.p)-m:], p[len(p)-m:])
nn, err := w.w.Write(p[:len(p)-m])
return n + nn, err
}
type flateWriteWrapper struct {
fw *flate.Writer
tw *truncWriter
p *sync.Pool
}
func (w *flateWriteWrapper) Write(p []byte) (int, error) {
if w.fw == nil {
return 0, errWriteClosed
}
return w.fw.Write(p)
}
func (w *flateWriteWrapper) Close() error {
if w.fw == nil {
return errWriteClosed
}
err1 := w.fw.Flush()
w.p.Put(w.fw)
w.fw = nil
if w.tw.p != [4]byte{0, 0, 0xff, 0xff} {
return errors.New("websocket: internal error, unexpected bytes at end of flate stream")
}
err2 := w.tw.w.Close()
if err1 != nil {
return err1
}
return err2
}
type flateReadWrapper struct {
fr io.ReadCloser
}
func (r *flateReadWrapper) Read(p []byte) (int, error) {
if r.fr == nil {
return 0, io.ErrClosedPipe
}
n, err := r.fr.Read(p)
if err == io.EOF {
// Preemptively place the reader back in the pool. This helps with
// scenarios where the application does not call NextReader() soon after
// this final read.
r.Close()
}
return n, err
}
func (r *flateReadWrapper) Close() error {
if r.fr == nil {
return io.ErrClosedPipe
}
err := r.fr.Close()
flateReaderPool.Put(r.fr)
r.fr = nil
return err
}

1230
vendor/github.com/gorilla/websocket/conn.go generated vendored Normal file

File diff suppressed because it is too large Load Diff

227
vendor/github.com/gorilla/websocket/doc.go generated vendored Normal file
View File

@@ -0,0 +1,227 @@
// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package websocket implements the WebSocket protocol defined in RFC 6455.
//
// Overview
//
// The Conn type represents a WebSocket connection. A server application calls
// the Upgrader.Upgrade method from an HTTP request handler to get a *Conn:
//
// var upgrader = websocket.Upgrader{
// ReadBufferSize: 1024,
// WriteBufferSize: 1024,
// }
//
// func handler(w http.ResponseWriter, r *http.Request) {
// conn, err := upgrader.Upgrade(w, r, nil)
// if err != nil {
// log.Println(err)
// return
// }
// ... Use conn to send and receive messages.
// }
//
// Call the connection's WriteMessage and ReadMessage methods to send and
// receive messages as a slice of bytes. This snippet of code shows how to echo
// messages using these methods:
//
// for {
// messageType, p, err := conn.ReadMessage()
// if err != nil {
// log.Println(err)
// return
// }
// if err := conn.WriteMessage(messageType, p); err != nil {
// log.Println(err)
// return
// }
// }
//
// In the above snippet, p is a []byte and messageType is an int with the value
// websocket.BinaryMessage or websocket.TextMessage.
//
// An application can also send and receive messages using the io.WriteCloser
// and io.Reader interfaces. To send a message, call the connection NextWriter
// method to get an io.WriteCloser, write the message to the writer and close
// the writer when done. To receive a message, call the connection NextReader
// method to get an io.Reader and read until io.EOF is returned. This snippet
// shows how to echo messages using the NextWriter and NextReader methods:
//
// for {
// messageType, r, err := conn.NextReader()
// if err != nil {
// return
// }
// w, err := conn.NextWriter(messageType)
// if err != nil {
// return err
// }
// if _, err := io.Copy(w, r); err != nil {
// return err
// }
// if err := w.Close(); err != nil {
// return err
// }
// }
//
// Data Messages
//
// The WebSocket protocol distinguishes between text and binary data messages.
// Text messages are interpreted as UTF-8 encoded text. The interpretation of
// binary messages is left to the application.
//
// This package uses the TextMessage and BinaryMessage integer constants to
// identify the two data message types. The ReadMessage and NextReader methods
// return the type of the received message. The messageType argument to the
// WriteMessage and NextWriter methods specifies the type of a sent message.
//
// It is the application's responsibility to ensure that text messages are
// valid UTF-8 encoded text.
//
// Control Messages
//
// The WebSocket protocol defines three types of control messages: close, ping
// and pong. Call the connection WriteControl, WriteMessage or NextWriter
// methods to send a control message to the peer.
//
// Connections handle received close messages by calling the handler function
// set with the SetCloseHandler method and by returning a *CloseError from the
// NextReader, ReadMessage or the message Read method. The default close
// handler sends a close message to the peer.
//
// Connections handle received ping messages by calling the handler function
// set with the SetPingHandler method. The default ping handler sends a pong
// message to the peer.
//
// Connections handle received pong messages by calling the handler function
// set with the SetPongHandler method. The default pong handler does nothing.
// If an application sends ping messages, then the application should set a
// pong handler to receive the corresponding pong.
//
// The control message handler functions are called from the NextReader,
// ReadMessage and message reader Read methods. The default close and ping
// handlers can block these methods for a short time when the handler writes to
// the connection.
//
// The application must read the connection to process close, ping and pong
// messages sent from the peer. If the application is not otherwise interested
// in messages from the peer, then the application should start a goroutine to
// read and discard messages from the peer. A simple example is:
//
// func readLoop(c *websocket.Conn) {
// for {
// if _, _, err := c.NextReader(); err != nil {
// c.Close()
// break
// }
// }
// }
//
// Concurrency
//
// Connections support one concurrent reader and one concurrent writer.
//
// Applications are responsible for ensuring that no more than one goroutine
// calls the write methods (NextWriter, SetWriteDeadline, WriteMessage,
// WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and
// that no more than one goroutine calls the read methods (NextReader,
// SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler)
// concurrently.
//
// The Close and WriteControl methods can be called concurrently with all other
// methods.
//
// Origin Considerations
//
// Web browsers allow Javascript applications to open a WebSocket connection to
// any host. It's up to the server to enforce an origin policy using the Origin
// request header sent by the browser.
//
// The Upgrader calls the function specified in the CheckOrigin field to check
// the origin. If the CheckOrigin function returns false, then the Upgrade
// method fails the WebSocket handshake with HTTP status 403.
//
// If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail
// the handshake if the Origin request header is present and the Origin host is
// not equal to the Host request header.
//
// The deprecated package-level Upgrade function does not perform origin
// checking. The application is responsible for checking the Origin header
// before calling the Upgrade function.
//
// Buffers
//
// Connections buffer network input and output to reduce the number
// of system calls when reading or writing messages.
//
// Write buffers are also used for constructing WebSocket frames. See RFC 6455,
// Section 5 for a discussion of message framing. A WebSocket frame header is
// written to the network each time a write buffer is flushed to the network.
// Decreasing the size of the write buffer can increase the amount of framing
// overhead on the connection.
//
// The buffer sizes in bytes are specified by the ReadBufferSize and
// WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default
// size of 4096 when a buffer size field is set to zero. The Upgrader reuses
// buffers created by the HTTP server when a buffer size field is set to zero.
// The HTTP server buffers have a size of 4096 at the time of this writing.
//
// The buffer sizes do not limit the size of a message that can be read or
// written by a connection.
//
// Buffers are held for the lifetime of the connection by default. If the
// Dialer or Upgrader WriteBufferPool field is set, then a connection holds the
// write buffer only when writing a message.
//
// Applications should tune the buffer sizes to balance memory use and
// performance. Increasing the buffer size uses more memory, but can reduce the
// number of system calls to read or write the network. In the case of writing,
// increasing the buffer size can reduce the number of frame headers written to
// the network.
//
// Some guidelines for setting buffer parameters are:
//
// Limit the buffer sizes to the maximum expected message size. Buffers larger
// than the largest message do not provide any benefit.
//
// Depending on the distribution of message sizes, setting the buffer size to
// a value less than the maximum expected message size can greatly reduce memory
// use with a small impact on performance. Here's an example: If 99% of the
// messages are smaller than 256 bytes and the maximum message size is 512
// bytes, then a buffer size of 256 bytes will result in 1.01 more system calls
// than a buffer size of 512 bytes. The memory savings is 50%.
//
// A write buffer pool is useful when the application has a modest number of
// writes over a large number of connections. When buffers are pooled, a larger
// buffer size has a reduced impact on total memory use and has the benefit of
// reducing system calls and frame overhead.
//
// Compression EXPERIMENTAL
//
// Per message compression extensions (RFC 7692) are experimentally supported
// by this package in a limited capacity. Setting the EnableCompression option
// to true in Dialer or Upgrader will attempt to negotiate per message deflate
// support.
//
// var upgrader = websocket.Upgrader{
// EnableCompression: true,
// }
//
// If compression was successfully negotiated with the connection's peer, any
// message received in compressed form will be automatically decompressed.
// All Read methods will return uncompressed bytes.
//
// Per message compression of messages written to a connection can be enabled
// or disabled by calling the corresponding Conn method:
//
// conn.EnableWriteCompression(false)
//
// Currently this package does not support compression with "context takeover".
// This means that messages must be compressed and decompressed in isolation,
// without retaining sliding window or dictionary state across messages. For
// more details refer to RFC 7692.
//
// Use of compression is experimental and may result in decreased performance.
package websocket
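
A sketch of the ping/pong pattern described in the Control Messages section above; the timeout values are arbitrary choices for illustration, not package defaults.

package example

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

const (
	pongWait   = 60 * time.Second // assumed value, not a package default
	pingPeriod = (pongWait * 9) / 10
)

// keepAlive sends pings from a writer goroutine and extends the read
// deadline whenever a pong arrives; the read loop keeps control frames
// flowing, as the documentation above requires.
func keepAlive(c *websocket.Conn) {
	c.SetReadDeadline(time.Now().Add(pongWait))
	c.SetPongHandler(func(string) error {
		return c.SetReadDeadline(time.Now().Add(pongWait))
	})

	go func() {
		ticker := time.NewTicker(pingPeriod)
		defer ticker.Stop()
		for range ticker.C {
			if err := c.WriteControl(websocket.PingMessage, nil, time.Now().Add(10*time.Second)); err != nil {
				return
			}
		}
	}()

	for {
		if _, _, err := c.ReadMessage(); err != nil {
			log.Println("read:", err)
			return
		}
	}
}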

42
vendor/github.com/gorilla/websocket/join.go generated vendored Normal file
View File

@@ -0,0 +1,42 @@
// Copyright 2019 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"io"
"strings"
)
// JoinMessages concatenates received messages to create a single io.Reader.
// The string term is appended to each message. The returned reader does not
// support concurrent calls to the Read method.
func JoinMessages(c *Conn, term string) io.Reader {
return &joinReader{c: c, term: term}
}
type joinReader struct {
c *Conn
term string
r io.Reader
}
func (r *joinReader) Read(p []byte) (int, error) {
if r.r == nil {
var err error
_, r.r, err = r.c.NextReader()
if err != nil {
return 0, err
}
if r.term != "" {
r.r = io.MultiReader(r.r, strings.NewReader(r.term))
}
}
n, err := r.r.Read(p)
if err == io.EOF {
err = nil
r.r = nil
}
return n, err
}
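
One way to consume JoinMessages, sketched under the assumption that each WebSocket message carries one logical record: with "\n" as the terminator, the joined stream can be read line by line with a bufio.Scanner.

package example

import (
	"bufio"
	"log"

	"github.com/gorilla/websocket"
)

// scanLines reads the joined message stream one record at a time.
func scanLines(c *websocket.Conn) {
	r := websocket.JoinMessages(c, "\n")
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		log.Println("record:", scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Println("scan:", err)
	}
}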

60
vendor/github.com/gorilla/websocket/json.go generated vendored Normal file
View File

@@ -0,0 +1,60 @@
// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"encoding/json"
"io"
)
// WriteJSON writes the JSON encoding of v as a message.
//
// Deprecated: Use c.WriteJSON instead.
func WriteJSON(c *Conn, v interface{}) error {
return c.WriteJSON(v)
}
// WriteJSON writes the JSON encoding of v as a message.
//
// See the documentation for encoding/json Marshal for details about the
// conversion of Go values to JSON.
func (c *Conn) WriteJSON(v interface{}) error {
w, err := c.NextWriter(TextMessage)
if err != nil {
return err
}
err1 := json.NewEncoder(w).Encode(v)
err2 := w.Close()
if err1 != nil {
return err1
}
return err2
}
// ReadJSON reads the next JSON-encoded message from the connection and stores
// it in the value pointed to by v.
//
// Deprecated: Use c.ReadJSON instead.
func ReadJSON(c *Conn, v interface{}) error {
return c.ReadJSON(v)
}
// ReadJSON reads the next JSON-encoded message from the connection and stores
// it in the value pointed to by v.
//
// See the documentation for the encoding/json Unmarshal function for details
// about the conversion of JSON to a Go value.
func (c *Conn) ReadJSON(v interface{}) error {
_, r, err := c.NextReader()
if err != nil {
return err
}
err = json.NewDecoder(r).Decode(v)
if err == io.EOF {
// One value is expected in the message.
err = io.ErrUnexpectedEOF
}
return err
}
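
A short sketch of the WriteJSON/ReadJSON helpers above; the event type is hypothetical and used only for illustration.

package example

import (
	"log"

	"github.com/gorilla/websocket"
)

// event is a hypothetical payload type.
type event struct {
	Type string `json:"type"`
	Data string `json:"data"`
}

// echoJSON writes one JSON message and decodes the next incoming message.
func echoJSON(c *websocket.Conn) error {
	if err := c.WriteJSON(event{Type: "greeting", Data: "hello"}); err != nil {
		return err
	}
	var reply event
	if err := c.ReadJSON(&reply); err != nil {
		return err
	}
	log.Printf("got %s: %s", reply.Type, reply.Data)
	return nil
}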

55
vendor/github.com/gorilla/websocket/mask.go generated vendored Normal file
View File

@@ -0,0 +1,55 @@
// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. Use of
// this source code is governed by a BSD-style license that can be found in the
// LICENSE file.
//go:build !appengine
// +build !appengine
package websocket
import "unsafe"
const wordSize = int(unsafe.Sizeof(uintptr(0)))
func maskBytes(key [4]byte, pos int, b []byte) int {
// Mask one byte at a time for small buffers.
if len(b) < 2*wordSize {
for i := range b {
b[i] ^= key[pos&3]
pos++
}
return pos & 3
}
// Mask one byte at a time to word boundary.
if n := int(uintptr(unsafe.Pointer(&b[0]))) % wordSize; n != 0 {
n = wordSize - n
for i := range b[:n] {
b[i] ^= key[pos&3]
pos++
}
b = b[n:]
}
// Create aligned word size key.
var k [wordSize]byte
for i := range k {
k[i] = key[(pos+i)&3]
}
kw := *(*uintptr)(unsafe.Pointer(&k))
// Mask one word at a time.
n := (len(b) / wordSize) * wordSize
for i := 0; i < n; i += wordSize {
*(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&b[0])) + uintptr(i))) ^= kw
}
// Mask one byte at a time for remaining bytes.
b = b[n:]
for i := range b {
b[i] ^= key[pos&3]
pos++
}
return pos & 3
}
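
For illustration only, a standalone sketch mirroring the byte-at-a-time fallback above: each payload byte is XORed with the 4-byte key in rotation, and applying the same mask twice restores the original data.

package example

// maskDemo applies the RFC 6455 client-to-server mask out of place.
func maskDemo(key [4]byte, payload []byte) []byte {
	out := make([]byte, len(payload))
	for i, b := range payload {
		out[i] = b ^ key[i&3]
	}
	return out
}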

16
vendor/github.com/gorilla/websocket/mask_safe.go generated vendored Normal file
View File

@@ -0,0 +1,16 @@
// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. Use of
// this source code is governed by a BSD-style license that can be found in the
// LICENSE file.
//go:build appengine
// +build appengine
package websocket
func maskBytes(key [4]byte, pos int, b []byte) int {
for i := range b {
b[i] ^= key[pos&3]
pos++
}
return pos & 3
}

102
vendor/github.com/gorilla/websocket/prepared.go generated vendored Normal file
View File

@@ -0,0 +1,102 @@
// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"bytes"
"net"
"sync"
"time"
)
// PreparedMessage caches the on-the-wire representations of a message payload.
// Use PreparedMessage to efficiently send a message payload to multiple
// connections. PreparedMessage is especially useful when compression is used
// because the CPU and memory expensive compression operation can be executed
// once for a given set of compression options.
type PreparedMessage struct {
messageType int
data []byte
mu sync.Mutex
frames map[prepareKey]*preparedFrame
}
// prepareKey defines a unique set of options to cache prepared frames in PreparedMessage.
type prepareKey struct {
isServer bool
compress bool
compressionLevel int
}
// preparedFrame contains data in wire representation.
type preparedFrame struct {
once sync.Once
data []byte
}
// NewPreparedMessage returns an initialized PreparedMessage. You can then send
// it to a connection using the WritePreparedMessage method. The valid wire
// representation will be calculated lazily, only once for a given set of
// connection options.
func NewPreparedMessage(messageType int, data []byte) (*PreparedMessage, error) {
pm := &PreparedMessage{
messageType: messageType,
frames: make(map[prepareKey]*preparedFrame),
data: data,
}
// Prepare a plain server frame.
_, frameData, err := pm.frame(prepareKey{isServer: true, compress: false})
if err != nil {
return nil, err
}
// To protect against caller modifying the data argument, remember the data
// copied to the plain server frame.
pm.data = frameData[len(frameData)-len(data):]
return pm, nil
}
func (pm *PreparedMessage) frame(key prepareKey) (int, []byte, error) {
pm.mu.Lock()
frame, ok := pm.frames[key]
if !ok {
frame = &preparedFrame{}
pm.frames[key] = frame
}
pm.mu.Unlock()
var err error
frame.once.Do(func() {
// Prepare a frame using a 'fake' connection.
// TODO: Refactor code in conn.go to allow more direct construction of
// the frame.
mu := make(chan struct{}, 1)
mu <- struct{}{}
var nc prepareConn
c := &Conn{
conn: &nc,
mu: mu,
isServer: key.isServer,
compressionLevel: key.compressionLevel,
enableWriteCompression: true,
writeBuf: make([]byte, defaultWriteBufferSize+maxFrameHeaderSize),
}
if key.compress {
c.newCompressionWriter = compressNoContextTakeover
}
err = c.WriteMessage(pm.messageType, pm.data)
frame.data = nc.buf.Bytes()
})
return pm.messageType, frame.data, err
}
type prepareConn struct {
buf bytes.Buffer
net.Conn
}
func (pc *prepareConn) Write(p []byte) (int, error) { return pc.buf.Write(p) }
func (pc *prepareConn) SetWriteDeadline(t time.Time) error { return nil }
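
A sketch of the broadcast use case described above: the frame is prepared once and reused for every connection. The conns slice is assumed to be maintained elsewhere by the application.

package example

import (
	"log"

	"github.com/gorilla/websocket"
)

// broadcast prepares the payload once and writes it to each connection.
func broadcast(conns []*websocket.Conn, payload []byte) {
	pm, err := websocket.NewPreparedMessage(websocket.TextMessage, payload)
	if err != nil {
		log.Println("prepare:", err)
		return
	}
	for _, c := range conns {
		if err := c.WritePreparedMessage(pm); err != nil {
			log.Println("write:", err)
		}
	}
}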

77
vendor/github.com/gorilla/websocket/proxy.go generated vendored Normal file
View File

@@ -0,0 +1,77 @@
// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"bufio"
"encoding/base64"
"errors"
"net"
"net/http"
"net/url"
"strings"
)
type netDialerFunc func(network, addr string) (net.Conn, error)
func (fn netDialerFunc) Dial(network, addr string) (net.Conn, error) {
return fn(network, addr)
}
func init() {
proxy_RegisterDialerType("http", func(proxyURL *url.URL, forwardDialer proxy_Dialer) (proxy_Dialer, error) {
return &httpProxyDialer{proxyURL: proxyURL, forwardDial: forwardDialer.Dial}, nil
})
}
type httpProxyDialer struct {
proxyURL *url.URL
forwardDial func(network, addr string) (net.Conn, error)
}
func (hpd *httpProxyDialer) Dial(network string, addr string) (net.Conn, error) {
hostPort, _ := hostPortNoPort(hpd.proxyURL)
conn, err := hpd.forwardDial(network, hostPort)
if err != nil {
return nil, err
}
connectHeader := make(http.Header)
if user := hpd.proxyURL.User; user != nil {
proxyUser := user.Username()
if proxyPassword, passwordSet := user.Password(); passwordSet {
credential := base64.StdEncoding.EncodeToString([]byte(proxyUser + ":" + proxyPassword))
connectHeader.Set("Proxy-Authorization", "Basic "+credential)
}
}
connectReq := &http.Request{
Method: http.MethodConnect,
URL: &url.URL{Opaque: addr},
Host: addr,
Header: connectHeader,
}
if err := connectReq.Write(conn); err != nil {
conn.Close()
return nil, err
}
// Read response. It's OK to use and discard the buffered reader here because
// the remote server does not speak until spoken to.
br := bufio.NewReader(conn)
resp, err := http.ReadResponse(br, connectReq)
if err != nil {
conn.Close()
return nil, err
}
if resp.StatusCode != 200 {
conn.Close()
f := strings.SplitN(resp.Status, " ", 2)
return nil, errors.New(f[1])
}
return conn, nil
}
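
A sketch of routing the handshake through an HTTP CONNECT proxy via the Dialer.Proxy hook; the proxy and target URLs are placeholders, and DefaultDialer already picks up http.ProxyFromEnvironment.

package example

import (
	"log"
	"net/http"
	"net/url"

	"github.com/gorilla/websocket"
)

// dialViaProxy dials a WebSocket endpoint through an explicit HTTP proxy.
func dialViaProxy() (*websocket.Conn, error) {
	proxyURL, err := url.Parse("http://proxy.internal:3128") // placeholder proxy
	if err != nil {
		return nil, err
	}
	d := websocket.Dialer{Proxy: http.ProxyURL(proxyURL)}
	c, _, err := d.Dial("ws://localhost:8080/ws", nil)
	if err != nil {
		log.Println("dial:", err)
		return nil, err
	}
	return c, nil
}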

365
vendor/github.com/gorilla/websocket/server.go generated vendored Normal file
View File

@@ -0,0 +1,365 @@
// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"bufio"
"errors"
"io"
"net/http"
"net/url"
"strings"
"time"
)
// HandshakeError describes an error with the handshake from the peer.
type HandshakeError struct {
message string
}
func (e HandshakeError) Error() string { return e.message }
// Upgrader specifies parameters for upgrading an HTTP connection to a
// WebSocket connection.
//
// It is safe to call Upgrader's methods concurrently.
type Upgrader struct {
// HandshakeTimeout specifies the duration for the handshake to complete.
HandshakeTimeout time.Duration
// ReadBufferSize and WriteBufferSize specify I/O buffer sizes in bytes. If a buffer
// size is zero, then buffers allocated by the HTTP server are used. The
// I/O buffer sizes do not limit the size of the messages that can be sent
// or received.
ReadBufferSize, WriteBufferSize int
// WriteBufferPool is a pool of buffers for write operations. If the value
// is not set, then write buffers are allocated to the connection for the
// lifetime of the connection.
//
// A pool is most useful when the application has a modest volume of writes
// across a large number of connections.
//
// Applications should use a single pool for each unique value of
// WriteBufferSize.
WriteBufferPool BufferPool
// Subprotocols specifies the server's supported protocols in order of
// preference. If this field is not nil, then the Upgrade method negotiates a
// subprotocol by selecting the first match in this list with a protocol
// requested by the client. If there's no match, then no protocol is
// negotiated (the Sec-Websocket-Protocol header is not included in the
// handshake response).
Subprotocols []string
// Error specifies the function for generating HTTP error responses. If Error
// is nil, then http.Error is used to generate the HTTP response.
Error func(w http.ResponseWriter, r *http.Request, status int, reason error)
// CheckOrigin returns true if the request Origin header is acceptable. If
// CheckOrigin is nil, then a safe default is used: return false if the
// Origin request header is present and the origin host is not equal to
// request Host header.
//
// A CheckOrigin function should carefully validate the request origin to
// prevent cross-site request forgery.
CheckOrigin func(r *http.Request) bool
// EnableCompression specifies whether the server should attempt to negotiate per
// message compression (RFC 7692). Setting this value to true does not
// guarantee that compression will be supported. Currently only "no context
// takeover" modes are supported.
EnableCompression bool
}
func (u *Upgrader) returnError(w http.ResponseWriter, r *http.Request, status int, reason string) (*Conn, error) {
err := HandshakeError{reason}
if u.Error != nil {
u.Error(w, r, status, err)
} else {
w.Header().Set("Sec-Websocket-Version", "13")
http.Error(w, http.StatusText(status), status)
}
return nil, err
}
// checkSameOrigin returns true if the origin is not set or is equal to the request host.
func checkSameOrigin(r *http.Request) bool {
origin := r.Header["Origin"]
if len(origin) == 0 {
return true
}
u, err := url.Parse(origin[0])
if err != nil {
return false
}
return equalASCIIFold(u.Host, r.Host)
}
func (u *Upgrader) selectSubprotocol(r *http.Request, responseHeader http.Header) string {
if u.Subprotocols != nil {
clientProtocols := Subprotocols(r)
for _, serverProtocol := range u.Subprotocols {
for _, clientProtocol := range clientProtocols {
if clientProtocol == serverProtocol {
return clientProtocol
}
}
}
} else if responseHeader != nil {
return responseHeader.Get("Sec-Websocket-Protocol")
}
return ""
}
// Upgrade upgrades the HTTP server connection to the WebSocket protocol.
//
// The responseHeader is included in the response to the client's upgrade
// request. Use the responseHeader to specify cookies (Set-Cookie). To specify
// subprotocols supported by the server, set Upgrader.Subprotocols directly.
//
// If the upgrade fails, then Upgrade replies to the client with an HTTP error
// response.
func (u *Upgrader) Upgrade(w http.ResponseWriter, r *http.Request, responseHeader http.Header) (*Conn, error) {
const badHandshake = "websocket: the client is not using the websocket protocol: "
if !tokenListContainsValue(r.Header, "Connection", "upgrade") {
return u.returnError(w, r, http.StatusBadRequest, badHandshake+"'upgrade' token not found in 'Connection' header")
}
if !tokenListContainsValue(r.Header, "Upgrade", "websocket") {
return u.returnError(w, r, http.StatusBadRequest, badHandshake+"'websocket' token not found in 'Upgrade' header")
}
if r.Method != http.MethodGet {
return u.returnError(w, r, http.StatusMethodNotAllowed, badHandshake+"request method is not GET")
}
if !tokenListContainsValue(r.Header, "Sec-Websocket-Version", "13") {
return u.returnError(w, r, http.StatusBadRequest, "websocket: unsupported version: 13 not found in 'Sec-Websocket-Version' header")
}
if _, ok := responseHeader["Sec-Websocket-Extensions"]; ok {
return u.returnError(w, r, http.StatusInternalServerError, "websocket: application specific 'Sec-WebSocket-Extensions' headers are unsupported")
}
checkOrigin := u.CheckOrigin
if checkOrigin == nil {
checkOrigin = checkSameOrigin
}
if !checkOrigin(r) {
return u.returnError(w, r, http.StatusForbidden, "websocket: request origin not allowed by Upgrader.CheckOrigin")
}
challengeKey := r.Header.Get("Sec-Websocket-Key")
if challengeKey == "" {
return u.returnError(w, r, http.StatusBadRequest, "websocket: not a websocket handshake: 'Sec-WebSocket-Key' header is missing or blank")
}
subprotocol := u.selectSubprotocol(r, responseHeader)
// Negotiate PMCE
var compress bool
if u.EnableCompression {
for _, ext := range parseExtensions(r.Header) {
if ext[""] != "permessage-deflate" {
continue
}
compress = true
break
}
}
h, ok := w.(http.Hijacker)
if !ok {
return u.returnError(w, r, http.StatusInternalServerError, "websocket: response does not implement http.Hijacker")
}
var brw *bufio.ReadWriter
netConn, brw, err := h.Hijack()
if err != nil {
return u.returnError(w, r, http.StatusInternalServerError, err.Error())
}
if brw.Reader.Buffered() > 0 {
netConn.Close()
return nil, errors.New("websocket: client sent data before handshake is complete")
}
var br *bufio.Reader
if u.ReadBufferSize == 0 && bufioReaderSize(netConn, brw.Reader) > 256 {
// Reuse hijacked buffered reader as connection reader.
br = brw.Reader
}
buf := bufioWriterBuffer(netConn, brw.Writer)
var writeBuf []byte
if u.WriteBufferPool == nil && u.WriteBufferSize == 0 && len(buf) >= maxFrameHeaderSize+256 {
// Reuse hijacked write buffer as connection buffer.
writeBuf = buf
}
c := newConn(netConn, true, u.ReadBufferSize, u.WriteBufferSize, u.WriteBufferPool, br, writeBuf)
c.subprotocol = subprotocol
if compress {
c.newCompressionWriter = compressNoContextTakeover
c.newDecompressionReader = decompressNoContextTakeover
}
// Use larger of hijacked buffer and connection write buffer for header.
p := buf
if len(c.writeBuf) > len(p) {
p = c.writeBuf
}
p = p[:0]
p = append(p, "HTTP/1.1 101 Switching Protocols\r\nUpgrade: websocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: "...)
p = append(p, computeAcceptKey(challengeKey)...)
p = append(p, "\r\n"...)
if c.subprotocol != "" {
p = append(p, "Sec-WebSocket-Protocol: "...)
p = append(p, c.subprotocol...)
p = append(p, "\r\n"...)
}
if compress {
p = append(p, "Sec-WebSocket-Extensions: permessage-deflate; server_no_context_takeover; client_no_context_takeover\r\n"...)
}
for k, vs := range responseHeader {
if k == "Sec-Websocket-Protocol" {
continue
}
for _, v := range vs {
p = append(p, k...)
p = append(p, ": "...)
for i := 0; i < len(v); i++ {
b := v[i]
if b <= 31 {
// prevent response splitting.
b = ' '
}
p = append(p, b)
}
p = append(p, "\r\n"...)
}
}
p = append(p, "\r\n"...)
// Clear deadlines set by HTTP server.
netConn.SetDeadline(time.Time{})
if u.HandshakeTimeout > 0 {
netConn.SetWriteDeadline(time.Now().Add(u.HandshakeTimeout))
}
if _, err = netConn.Write(p); err != nil {
netConn.Close()
return nil, err
}
if u.HandshakeTimeout > 0 {
netConn.SetWriteDeadline(time.Time{})
}
return c, nil
}
// Upgrade upgrades the HTTP server connection to the WebSocket protocol.
//
// Deprecated: Use websocket.Upgrader instead.
//
// Upgrade does not perform origin checking. The application is responsible for
// checking the Origin header before calling Upgrade. An example implementation
// of the same origin policy check is:
//
// if req.Header.Get("Origin") != "http://"+req.Host {
// http.Error(w, "Origin not allowed", http.StatusForbidden)
// return
// }
//
// If the endpoint supports subprotocols, then the application is responsible
// for negotiating the protocol used on the connection. Use the Subprotocols()
// function to get the subprotocols requested by the client. Use the
// Sec-Websocket-Protocol response header to specify the subprotocol selected
// by the application.
//
// The responseHeader is included in the response to the client's upgrade
// request. Use the responseHeader to specify cookies (Set-Cookie) and the
// negotiated subprotocol (Sec-Websocket-Protocol).
//
// The connection buffers IO to the underlying network connection. The
// readBufSize and writeBufSize parameters specify the size of the buffers to
// use. Messages can be larger than the buffers.
//
// If the request is not a valid WebSocket handshake, then Upgrade returns an
// error of type HandshakeError. Applications should handle this error by
// replying to the client with an HTTP error response.
func Upgrade(w http.ResponseWriter, r *http.Request, responseHeader http.Header, readBufSize, writeBufSize int) (*Conn, error) {
u := Upgrader{ReadBufferSize: readBufSize, WriteBufferSize: writeBufSize}
u.Error = func(w http.ResponseWriter, r *http.Request, status int, reason error) {
// don't return errors to maintain backwards compatibility
}
u.CheckOrigin = func(r *http.Request) bool {
// allow all connections by default
return true
}
return u.Upgrade(w, r, responseHeader)
}
// Subprotocols returns the subprotocols requested by the client in the
// Sec-Websocket-Protocol header.
func Subprotocols(r *http.Request) []string {
h := strings.TrimSpace(r.Header.Get("Sec-Websocket-Protocol"))
if h == "" {
return nil
}
protocols := strings.Split(h, ",")
for i := range protocols {
protocols[i] = strings.TrimSpace(protocols[i])
}
return protocols
}
// IsWebSocketUpgrade returns true if the client requested upgrade to the
// WebSocket protocol.
func IsWebSocketUpgrade(r *http.Request) bool {
return tokenListContainsValue(r.Header, "Connection", "upgrade") &&
tokenListContainsValue(r.Header, "Upgrade", "websocket")
}
// bufioReaderSize returns the size of a bufio.Reader.
func bufioReaderSize(originalReader io.Reader, br *bufio.Reader) int {
// This code assumes that peek on a reset reader returns
// bufio.Reader.buf[:0].
// TODO: Use bufio.Reader.Size() after Go 1.10
br.Reset(originalReader)
if p, err := br.Peek(0); err == nil {
return cap(p)
}
return 0
}
// writeHook is an io.Writer that records the last slice passed to it via
// io.Writer.Write.
type writeHook struct {
p []byte
}
func (wh *writeHook) Write(p []byte) (int, error) {
wh.p = p
return len(p), nil
}
// bufioWriterBuffer grabs the buffer from a bufio.Writer.
func bufioWriterBuffer(originalWriter io.Writer, bw *bufio.Writer) []byte {
// This code assumes that bufio.Writer.buf[:1] is passed to the
// bufio.Writer's underlying writer.
var wh writeHook
bw.Reset(&wh)
bw.WriteByte(0)
bw.Flush()
bw.Reset(originalWriter)
return wh.p[:cap(wh.p)]
}
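
A usage sketch for the Upgrader above exercising the CheckOrigin and Subprotocols fields; the allowed origin and protocol names are placeholders.

package example

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	Subprotocols: []string{"chat.v2", "chat.v1"}, // placeholder protocol names
	CheckOrigin: func(r *http.Request) bool {
		return r.Header.Get("Origin") == "https://app.example.com" // placeholder origin
	},
}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		// Upgrade has already written an HTTP error response.
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()
	log.Println("requested:", websocket.Subprotocols(r), "selected:", conn.Subprotocol())
}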

21
vendor/github.com/gorilla/websocket/tls_handshake.go generated vendored Normal file
View File

@@ -0,0 +1,21 @@
//go:build go1.17
// +build go1.17
package websocket
import (
"context"
"crypto/tls"
)
func doHandshake(ctx context.Context, tlsConn *tls.Conn, cfg *tls.Config) error {
if err := tlsConn.HandshakeContext(ctx); err != nil {
return err
}
if !cfg.InsecureSkipVerify {
if err := tlsConn.VerifyHostname(cfg.ServerName); err != nil {
return err
}
}
return nil
}

View File

@@ -0,0 +1,21 @@
//go:build !go1.17
// +build !go1.17
package websocket
import (
"context"
"crypto/tls"
)
func doHandshake(ctx context.Context, tlsConn *tls.Conn, cfg *tls.Config) error {
if err := tlsConn.Handshake(); err != nil {
return err
}
if !cfg.InsecureSkipVerify {
if err := tlsConn.VerifyHostname(cfg.ServerName); err != nil {
return err
}
}
return nil
}

283
vendor/github.com/gorilla/websocket/util.go generated vendored Normal file
View File

@@ -0,0 +1,283 @@
// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package websocket
import (
"crypto/rand"
"crypto/sha1"
"encoding/base64"
"io"
"net/http"
"strings"
"unicode/utf8"
)
var keyGUID = []byte("258EAFA5-E914-47DA-95CA-C5AB0DC85B11")
func computeAcceptKey(challengeKey string) string {
h := sha1.New()
h.Write([]byte(challengeKey))
h.Write(keyGUID)
return base64.StdEncoding.EncodeToString(h.Sum(nil))
}
func generateChallengeKey() (string, error) {
p := make([]byte, 16)
if _, err := io.ReadFull(rand.Reader, p); err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(p), nil
}
// Token octets per RFC 2616.
var isTokenOctet = [256]bool{
'!': true,
'#': true,
'$': true,
'%': true,
'&': true,
'\'': true,
'*': true,
'+': true,
'-': true,
'.': true,
'0': true,
'1': true,
'2': true,
'3': true,
'4': true,
'5': true,
'6': true,
'7': true,
'8': true,
'9': true,
'A': true,
'B': true,
'C': true,
'D': true,
'E': true,
'F': true,
'G': true,
'H': true,
'I': true,
'J': true,
'K': true,
'L': true,
'M': true,
'N': true,
'O': true,
'P': true,
'Q': true,
'R': true,
'S': true,
'T': true,
'U': true,
'W': true,
'V': true,
'X': true,
'Y': true,
'Z': true,
'^': true,
'_': true,
'`': true,
'a': true,
'b': true,
'c': true,
'd': true,
'e': true,
'f': true,
'g': true,
'h': true,
'i': true,
'j': true,
'k': true,
'l': true,
'm': true,
'n': true,
'o': true,
'p': true,
'q': true,
'r': true,
's': true,
't': true,
'u': true,
'v': true,
'w': true,
'x': true,
'y': true,
'z': true,
'|': true,
'~': true,
}
// skipSpace returns a slice of the string s with all leading RFC 2616 linear
// whitespace removed.
func skipSpace(s string) (rest string) {
i := 0
for ; i < len(s); i++ {
if b := s[i]; b != ' ' && b != '\t' {
break
}
}
return s[i:]
}
// nextToken returns the leading RFC 2616 token of s and the string following
// the token.
func nextToken(s string) (token, rest string) {
i := 0
for ; i < len(s); i++ {
if !isTokenOctet[s[i]] {
break
}
}
return s[:i], s[i:]
}
// nextTokenOrQuoted returns the leading token or quoted string per RFC 2616
// and the string following the token or quoted string.
func nextTokenOrQuoted(s string) (value string, rest string) {
if !strings.HasPrefix(s, "\"") {
return nextToken(s)
}
s = s[1:]
for i := 0; i < len(s); i++ {
switch s[i] {
case '"':
return s[:i], s[i+1:]
case '\\':
p := make([]byte, len(s)-1)
j := copy(p, s[:i])
escape := true
for i = i + 1; i < len(s); i++ {
b := s[i]
switch {
case escape:
escape = false
p[j] = b
j++
case b == '\\':
escape = true
case b == '"':
return string(p[:j]), s[i+1:]
default:
p[j] = b
j++
}
}
return "", ""
}
}
return "", ""
}
// equalASCIIFold returns true if s is equal to t with ASCII case folding as
// defined in RFC 4790.
func equalASCIIFold(s, t string) bool {
for s != "" && t != "" {
sr, size := utf8.DecodeRuneInString(s)
s = s[size:]
tr, size := utf8.DecodeRuneInString(t)
t = t[size:]
if sr == tr {
continue
}
if 'A' <= sr && sr <= 'Z' {
sr = sr + 'a' - 'A'
}
if 'A' <= tr && tr <= 'Z' {
tr = tr + 'a' - 'A'
}
if sr != tr {
return false
}
}
return s == t
}
// tokenListContainsValue returns true if the 1#token header with the given
// name contains a token equal to value with ASCII case folding.
func tokenListContainsValue(header http.Header, name string, value string) bool {
headers:
for _, s := range header[name] {
for {
var t string
t, s = nextToken(skipSpace(s))
if t == "" {
continue headers
}
s = skipSpace(s)
if s != "" && s[0] != ',' {
continue headers
}
if equalASCIIFold(t, value) {
return true
}
if s == "" {
continue headers
}
s = s[1:]
}
}
return false
}
// parseExtensions parses WebSocket extensions from a header.
func parseExtensions(header http.Header) []map[string]string {
// From RFC 6455:
//
// Sec-WebSocket-Extensions = extension-list
// extension-list = 1#extension
// extension = extension-token *( ";" extension-param )
// extension-token = registered-token
// registered-token = token
// extension-param = token [ "=" (token | quoted-string) ]
// ;When using the quoted-string syntax variant, the value
// ;after quoted-string unescaping MUST conform to the
// ;'token' ABNF.
var result []map[string]string
headers:
for _, s := range header["Sec-Websocket-Extensions"] {
for {
var t string
t, s = nextToken(skipSpace(s))
if t == "" {
continue headers
}
ext := map[string]string{"": t}
for {
s = skipSpace(s)
if !strings.HasPrefix(s, ";") {
break
}
var k string
k, s = nextToken(skipSpace(s[1:]))
if k == "" {
continue headers
}
s = skipSpace(s)
var v string
if strings.HasPrefix(s, "=") {
v, s = nextTokenOrQuoted(skipSpace(s[1:]))
s = skipSpace(s)
}
if s != "" && s[0] != ',' && s[0] != ';' {
continue headers
}
ext[k] = v
}
if s != "" && s[0] != ',' {
continue headers
}
result = append(result, ext)
if s == "" {
continue headers
}
s = s[1:]
}
}
return result
}
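
For reference, the accept-key computation above can be reproduced standalone; if the logic matches, the sample key from RFC 6455, "dGhlIHNhbXBsZSBub25jZQ==", should yield "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".

package example

import (
	"crypto/sha1"
	"encoding/base64"
)

// acceptKey mirrors computeAcceptKey: SHA-1 over the client key plus the
// fixed GUID, then base64 encoding of the digest.
func acceptKey(challengeKey string) string {
	h := sha1.New()
	h.Write([]byte(challengeKey + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"))
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}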

473
vendor/github.com/gorilla/websocket/x_net_proxy.go generated vendored Normal file
View File

@@ -0,0 +1,473 @@
// Code generated by golang.org/x/tools/cmd/bundle. DO NOT EDIT.
//go:generate bundle -o x_net_proxy.go golang.org/x/net/proxy
// Package proxy provides support for a variety of protocols to proxy network
// data.
//
package websocket
import (
"errors"
"io"
"net"
"net/url"
"os"
"strconv"
"strings"
"sync"
)
type proxy_direct struct{}
// Direct is a direct proxy: one that makes network connections directly.
var proxy_Direct = proxy_direct{}
func (proxy_direct) Dial(network, addr string) (net.Conn, error) {
return net.Dial(network, addr)
}
// A PerHost directs connections to a default Dialer unless the host name
// requested matches one of a number of exceptions.
type proxy_PerHost struct {
def, bypass proxy_Dialer
bypassNetworks []*net.IPNet
bypassIPs []net.IP
bypassZones []string
bypassHosts []string
}
// NewPerHost returns a PerHost Dialer that directs connections to either
// defaultDialer or bypass, depending on whether the connection matches one of
// the configured rules.
func proxy_NewPerHost(defaultDialer, bypass proxy_Dialer) *proxy_PerHost {
return &proxy_PerHost{
def: defaultDialer,
bypass: bypass,
}
}
// Dial connects to the address addr on the given network through either
// defaultDialer or bypass.
func (p *proxy_PerHost) Dial(network, addr string) (c net.Conn, err error) {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return nil, err
}
return p.dialerForRequest(host).Dial(network, addr)
}
func (p *proxy_PerHost) dialerForRequest(host string) proxy_Dialer {
if ip := net.ParseIP(host); ip != nil {
for _, net := range p.bypassNetworks {
if net.Contains(ip) {
return p.bypass
}
}
for _, bypassIP := range p.bypassIPs {
if bypassIP.Equal(ip) {
return p.bypass
}
}
return p.def
}
for _, zone := range p.bypassZones {
if strings.HasSuffix(host, zone) {
return p.bypass
}
if host == zone[1:] {
// For a zone ".example.com", we match "example.com"
// too.
return p.bypass
}
}
for _, bypassHost := range p.bypassHosts {
if bypassHost == host {
return p.bypass
}
}
return p.def
}
// AddFromString parses a string that contains comma-separated values
// specifying hosts that should use the bypass proxy. Each value is either an
// IP address, a CIDR range, a zone (*.example.com) or a host name
// (localhost). A best effort is made to parse the string and errors are
// ignored.
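// For example (illustrative values), "10.0.0.0/8,*.example.com,localhost" adds a network, a zone and a host name.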
func (p *proxy_PerHost) AddFromString(s string) {
hosts := strings.Split(s, ",")
for _, host := range hosts {
host = strings.TrimSpace(host)
if len(host) == 0 {
continue
}
if strings.Contains(host, "/") {
// We assume that it's a CIDR address like 127.0.0.0/8
if _, net, err := net.ParseCIDR(host); err == nil {
p.AddNetwork(net)
}
continue
}
if ip := net.ParseIP(host); ip != nil {
p.AddIP(ip)
continue
}
if strings.HasPrefix(host, "*.") {
p.AddZone(host[1:])
continue
}
p.AddHost(host)
}
}
// AddIP specifies an IP address that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match an IP.
func (p *proxy_PerHost) AddIP(ip net.IP) {
p.bypassIPs = append(p.bypassIPs, ip)
}
// AddNetwork specifies an IP range that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match.
func (p *proxy_PerHost) AddNetwork(net *net.IPNet) {
p.bypassNetworks = append(p.bypassNetworks, net)
}
// AddZone specifies a DNS suffix that will use the bypass proxy. A zone of
// "example.com" matches "example.com" and all of its subdomains.
func (p *proxy_PerHost) AddZone(zone string) {
if strings.HasSuffix(zone, ".") {
zone = zone[:len(zone)-1]
}
if !strings.HasPrefix(zone, ".") {
zone = "." + zone
}
p.bypassZones = append(p.bypassZones, zone)
}
// AddHost specifies a host name that will use the bypass proxy.
func (p *proxy_PerHost) AddHost(host string) {
if strings.HasSuffix(host, ".") {
host = host[:len(host)-1]
}
p.bypassHosts = append(p.bypassHosts, host)
}
// A Dialer is a means to establish a connection.
type proxy_Dialer interface {
// Dial connects to the given address via the proxy.
Dial(network, addr string) (c net.Conn, err error)
}
// Auth contains authentication parameters that specific Dialers may require.
type proxy_Auth struct {
User, Password string
}
// FromEnvironment returns the dialer specified by the proxy related variables in
// the environment.
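// For example (illustrative values), ALL_PROXY=socks5://127.0.0.1:1080 selects the SOCKS5 dialer, while NO_PROXY=localhost,*.internal bypasses it for those hosts.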
func proxy_FromEnvironment() proxy_Dialer {
allProxy := proxy_allProxyEnv.Get()
if len(allProxy) == 0 {
return proxy_Direct
}
proxyURL, err := url.Parse(allProxy)
if err != nil {
return proxy_Direct
}
proxy, err := proxy_FromURL(proxyURL, proxy_Direct)
if err != nil {
return proxy_Direct
}
noProxy := proxy_noProxyEnv.Get()
if len(noProxy) == 0 {
return proxy
}
perHost := proxy_NewPerHost(proxy, proxy_Direct)
perHost.AddFromString(noProxy)
return perHost
}
// proxySchemes is a map from URL schemes to a function that creates a Dialer
// from a URL with such a scheme.
var proxy_proxySchemes map[string]func(*url.URL, proxy_Dialer) (proxy_Dialer, error)
// RegisterDialerType takes a URL scheme and a function to generate Dialers from
// a URL with that scheme and a forwarding Dialer. Registered schemes are used
// by FromURL.
func proxy_RegisterDialerType(scheme string, f func(*url.URL, proxy_Dialer) (proxy_Dialer, error)) {
if proxy_proxySchemes == nil {
proxy_proxySchemes = make(map[string]func(*url.URL, proxy_Dialer) (proxy_Dialer, error))
}
proxy_proxySchemes[scheme] = f
}
// FromURL returns a Dialer given a URL specification and an underlying
// Dialer for it to make network requests.
func proxy_FromURL(u *url.URL, forward proxy_Dialer) (proxy_Dialer, error) {
var auth *proxy_Auth
if u.User != nil {
auth = new(proxy_Auth)
auth.User = u.User.Username()
if p, ok := u.User.Password(); ok {
auth.Password = p
}
}
switch u.Scheme {
case "socks5":
return proxy_SOCKS5("tcp", u.Host, auth, forward)
}
// If the scheme doesn't match any of the built-in schemes, see if it
// was registered by another package.
if proxy_proxySchemes != nil {
if f, ok := proxy_proxySchemes[u.Scheme]; ok {
return f(u, forward)
}
}
return nil, errors.New("proxy: unknown scheme: " + u.Scheme)
}
var (
proxy_allProxyEnv = &proxy_envOnce{
names: []string{"ALL_PROXY", "all_proxy"},
}
proxy_noProxyEnv = &proxy_envOnce{
names: []string{"NO_PROXY", "no_proxy"},
}
)
// envOnce looks up an environment variable (optionally by multiple
// names) once. It mitigates expensive lookups on some platforms
// (e.g. Windows).
// (Borrowed from net/http/transport.go)
type proxy_envOnce struct {
names []string
once sync.Once
val string
}
func (e *proxy_envOnce) Get() string {
e.once.Do(e.init)
return e.val
}
func (e *proxy_envOnce) init() {
for _, n := range e.names {
e.val = os.Getenv(n)
if e.val != "" {
return
}
}
}
// SOCKS5 returns a Dialer that makes SOCKSv5 connections to the given address
// with an optional username and password. See RFC 1928 and RFC 1929.
func proxy_SOCKS5(network, addr string, auth *proxy_Auth, forward proxy_Dialer) (proxy_Dialer, error) {
s := &proxy_socks5{
network: network,
addr: addr,
forward: forward,
}
if auth != nil {
s.user = auth.User
s.password = auth.Password
}
return s, nil
}
type proxy_socks5 struct {
user, password string
network, addr string
forward proxy_Dialer
}
const proxy_socks5Version = 5
const (
proxy_socks5AuthNone = 0
proxy_socks5AuthPassword = 2
)
const proxy_socks5Connect = 1
const (
proxy_socks5IP4 = 1
proxy_socks5Domain = 3
proxy_socks5IP6 = 4
)
var proxy_socks5Errors = []string{
"",
"general failure",
"connection forbidden",
"network unreachable",
"host unreachable",
"connection refused",
"TTL expired",
"command not supported",
"address type not supported",
}
// Dial connects to the address addr on the given network via the SOCKS5 proxy.
func (s *proxy_socks5) Dial(network, addr string) (net.Conn, error) {
switch network {
case "tcp", "tcp6", "tcp4":
default:
return nil, errors.New("proxy: no support for SOCKS5 proxy connections of type " + network)
}
conn, err := s.forward.Dial(s.network, s.addr)
if err != nil {
return nil, err
}
if err := s.connect(conn, addr); err != nil {
conn.Close()
return nil, err
}
return conn, nil
}
// connect takes an existing connection to a socks5 proxy server,
// and commands the server to extend that connection to target,
// which must be a canonical address with a host and port.
func (s *proxy_socks5) connect(conn net.Conn, target string) error {
host, portStr, err := net.SplitHostPort(target)
if err != nil {
return err
}
port, err := strconv.Atoi(portStr)
if err != nil {
return errors.New("proxy: failed to parse port number: " + portStr)
}
if port < 1 || port > 0xffff {
return errors.New("proxy: port number out of range: " + portStr)
}
// the size here is just an estimate
buf := make([]byte, 0, 6+len(host))
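// Greeting: VER, NMETHODS, METHODS (RFC 1928); offer no-auth and, when credentials are configured, username/password auth.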
buf = append(buf, proxy_socks5Version)
if len(s.user) > 0 && len(s.user) < 256 && len(s.password) < 256 {
buf = append(buf, 2 /* num auth methods */, proxy_socks5AuthNone, proxy_socks5AuthPassword)
} else {
buf = append(buf, 1 /* num auth methods */, proxy_socks5AuthNone)
}
if _, err := conn.Write(buf); err != nil {
return errors.New("proxy: failed to write greeting to SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
if _, err := io.ReadFull(conn, buf[:2]); err != nil {
return errors.New("proxy: failed to read greeting from SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
if buf[0] != 5 {
return errors.New("proxy: SOCKS5 proxy at " + s.addr + " has unexpected version " + strconv.Itoa(int(buf[0])))
}
if buf[1] == 0xff {
return errors.New("proxy: SOCKS5 proxy at " + s.addr + " requires authentication")
}
// See RFC 1929
if buf[1] == proxy_socks5AuthPassword {
buf = buf[:0]
buf = append(buf, 1 /* password protocol version */)
buf = append(buf, uint8(len(s.user)))
buf = append(buf, s.user...)
buf = append(buf, uint8(len(s.password)))
buf = append(buf, s.password...)
if _, err := conn.Write(buf); err != nil {
return errors.New("proxy: failed to write authentication request to SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
if _, err := io.ReadFull(conn, buf[:2]); err != nil {
return errors.New("proxy: failed to read authentication reply from SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
if buf[1] != 0 {
return errors.New("proxy: SOCKS5 proxy at " + s.addr + " rejected username/password")
}
}
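// Connect request: VER, CMD=CONNECT, RSV, then ATYP with the destination address and port (RFC 1928).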
buf = buf[:0]
buf = append(buf, proxy_socks5Version, proxy_socks5Connect, 0 /* reserved */)
if ip := net.ParseIP(host); ip != nil {
if ip4 := ip.To4(); ip4 != nil {
buf = append(buf, proxy_socks5IP4)
ip = ip4
} else {
buf = append(buf, proxy_socks5IP6)
}
buf = append(buf, ip...)
} else {
if len(host) > 255 {
return errors.New("proxy: destination host name too long: " + host)
}
buf = append(buf, proxy_socks5Domain)
buf = append(buf, byte(len(host)))
buf = append(buf, host...)
}
buf = append(buf, byte(port>>8), byte(port))
if _, err := conn.Write(buf); err != nil {
return errors.New("proxy: failed to write connect request to SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
if _, err := io.ReadFull(conn, buf[:4]); err != nil {
return errors.New("proxy: failed to read connect reply from SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
failure := "unknown error"
if int(buf[1]) < len(proxy_socks5Errors) {
failure = proxy_socks5Errors[buf[1]]
}
if len(failure) > 0 {
return errors.New("proxy: SOCKS5 proxy at " + s.addr + " failed to connect: " + failure)
}
bytesToDiscard := 0
switch buf[3] {
case proxy_socks5IP4:
bytesToDiscard = net.IPv4len
case proxy_socks5IP6:
bytesToDiscard = net.IPv6len
case proxy_socks5Domain:
_, err := io.ReadFull(conn, buf[:1])
if err != nil {
return errors.New("proxy: failed to read domain length from SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
bytesToDiscard = int(buf[0])
default:
return errors.New("proxy: got unknown address type " + strconv.Itoa(int(buf[3])) + " from SOCKS5 proxy at " + s.addr)
}
if cap(buf) < bytesToDiscard {
buf = make([]byte, bytesToDiscard)
} else {
buf = buf[:bytesToDiscard]
}
if _, err := io.ReadFull(conn, buf); err != nil {
return errors.New("proxy: failed to read address from SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
// Also need to discard the port number
if _, err := io.ReadFull(conn, buf[:2]); err != nil {
return errors.New("proxy: failed to read port from SOCKS5 proxy at " + s.addr + ": " + err.Error())
}
return nil
}

20
vendor/github.com/jinzhu/copier/License generated vendored Normal file
View File

@@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2015 Jinzhu
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

132
vendor/github.com/jinzhu/copier/README.md generated vendored Normal file
View File

@@ -0,0 +1,132 @@
# Copier
I am a copier, I copy everything from one to another
[![test status](https://github.com/jinzhu/copier/workflows/tests/badge.svg?branch=master "test status")](https://github.com/jinzhu/copier/actions)
## Features
* Copy from field to field with same name
* Copy from method to field with same name
* Copy from field to method with same name
* Copy from slice to slice
* Copy from struct to slice
* Copy from map to map
* Enforce copying a field with a tag
* Ignore a field with a tag
* Deep Copy
## Usage
```go
package main
import (
"fmt"
"github.com/jinzhu/copier"
)
type User struct {
Name string
Role string
Age int32
EmployeCode int64 `copier:"EmployeNum"` // specify field name
// Explicitly ignored in the destination struct.
Salary int
}
func (user *User) DoubleAge() int32 {
return 2 * user.Age
}
// Tags in the destination struct provide instructions to copier.Copy to ignore
// or enforce copying and to panic or return an error if a field was not copied.
type Employee struct {
// Tell copier.Copy to panic if this field is not copied.
Name string `copier:"must"`
// Tell copier.Copy to return an error if this field is not copied.
Age int32 `copier:"must,nopanic"`
// Tell copier.Copy to explicitly ignore copying this field.
Salary int `copier:"-"`
DoubleAge int32
EmployeId int64 `copier:"EmployeNum"` // specify field name
SuperRole string
}
func (employee *Employee) Role(role string) {
employee.SuperRole = "Super " + role
}
func main() {
var (
user = User{Name: "Jinzhu", Age: 18, Role: "Admin", Salary: 200000}
users = []User{{Name: "Jinzhu", Age: 18, Role: "Admin", Salary: 100000}, {Name: "jinzhu 2", Age: 30, Role: "Dev", Salary: 60000}}
employee = Employee{Salary: 150000}
employees = []Employee{}
)
copier.Copy(&employee, &user)
fmt.Printf("%#v \n", employee)
// Employee{
// Name: "Jinzhu", // Copy from field
// Age: 18, // Copy from field
// Salary:150000, // Copying explicitly ignored
// DoubleAge: 36, // Copy from method
// EmployeeId: 0, // Ignored
// SuperRole: "Super Admin", // Copy to method
// }
// Copy struct to slice
copier.Copy(&employees, &user)
fmt.Printf("%#v \n", employees)
// []Employee{
// {Name: "Jinzhu", Age: 18, Salary:0, DoubleAge: 36, EmployeId: 0, SuperRole: "Super Admin"}
// }
// Copy slice to slice
employees = []Employee{}
copier.Copy(&employees, &users)
fmt.Printf("%#v \n", employees)
// []Employee{
// {Name: "Jinzhu", Age: 18, Salary:0, DoubleAge: 36, EmployeId: 0, SuperRole: "Super Admin"},
// {Name: "jinzhu 2", Age: 30, Salary:0, DoubleAge: 60, EmployeId: 0, SuperRole: "Super Dev"},
// }
// Copy map to map
map1 := map[int]int{3: 6, 4: 8}
map2 := map[int32]int8{}
copier.Copy(&map2, map1)
fmt.Printf("%#v \n", map2)
// map[int32]int8{3:6, 4:8}
}
```
### Copy with Option
```go
copier.CopyWithOption(&to, &from, copier.Option{IgnoreEmpty: true, DeepCopy: true})
```
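`Option` also accepts custom `Converters` (see the `TypeConverter` type in copier.go). A minimal sketch, assuming a `time.Time` source field that should be copied into a `string` destination field (the field types here are illustrative):
```go
copier.CopyWithOption(&to, &from, copier.Option{
	Converters: []copier.TypeConverter{
		{
			SrcType: time.Time{},
			DstType: copier.String, // shorthand constant for the string type
			Fn: func(src interface{}) (interface{}, error) {
				t, ok := src.(time.Time)
				if !ok {
					return nil, errors.New("src is not time.Time")
				}
				return t.Format(time.RFC3339), nil
			},
		},
	},
})
```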
## Contributing
You can help make the project better; check out [http://gorm.io/contribute.html](http://gorm.io/contribute.html) for things you can do.
# Author
**jinzhu**
* <http://github.com/jinzhu>
* <wosmvp@gmail.com>
* <http://twitter.com/zhangjinzhu>
## License
Released under the [MIT License](https://github.com/jinzhu/copier/blob/master/License).

697
vendor/github.com/jinzhu/copier/copier.go generated vendored Normal file
View File

@@ -0,0 +1,697 @@
package copier
import (
"database/sql"
"database/sql/driver"
"errors"
"fmt"
"reflect"
"strings"
"unicode"
)
// These flags define options for tag handling
const (
// Denotes that a destination field must be copied to. If copying fails then a panic will ensue.
tagMust uint8 = 1 << iota
// Denotes that the program should not panic when the must flag is on and
// value is not copied. The program will return an error instead.
tagNoPanic
// Denotes that a destination field should be ignored (not copied to).
tagIgnore
// Denotes that the value has been copied
hasCopied
// Some default converter types for a nicer syntax
String string = ""
Bool bool = false
Int int = 0
Float32 float32 = 0
Float64 float64 = 0
)
// Option sets copy options
type Option struct {
// setting this value to true will ignore copying zero values of all the fields, including bools, as well as a
// struct having all its fields set to their zero values respectively (see IsZero() in reflect/value.go)
IgnoreEmpty bool
DeepCopy bool
Converters []TypeConverter
}
type TypeConverter struct {
SrcType interface{}
DstType interface{}
Fn func(src interface{}) (interface{}, error)
}
type converterPair struct {
SrcType reflect.Type
DstType reflect.Type
}
// Tag Flags
type flags struct {
BitFlags map[string]uint8
SrcNames tagNameMapping
DestNames tagNameMapping
}
// Field Tag name mapping
type tagNameMapping struct {
FieldNameToTag map[string]string
TagToFieldName map[string]string
}
// Copy copies values from fromValue into toValue
func Copy(toValue interface{}, fromValue interface{}) (err error) {
return copier(toValue, fromValue, Option{})
}
// CopyWithOption copies with the given options
func CopyWithOption(toValue interface{}, fromValue interface{}, opt Option) (err error) {
return copier(toValue, fromValue, opt)
}
func copier(toValue interface{}, fromValue interface{}, opt Option) (err error) {
var (
isSlice bool
amount = 1
from = indirect(reflect.ValueOf(fromValue))
to = indirect(reflect.ValueOf(toValue))
converters map[converterPair]TypeConverter
)
// save converters into a map for faster lookup
for i := range opt.Converters {
if converters == nil {
converters = make(map[converterPair]TypeConverter)
}
pair := converterPair{
SrcType: reflect.TypeOf(opt.Converters[i].SrcType),
DstType: reflect.TypeOf(opt.Converters[i].DstType),
}
converters[pair] = opt.Converters[i]
}
if !to.CanAddr() {
return ErrInvalidCopyDestination
}
// Return if the from value is invalid
if !from.IsValid() {
return ErrInvalidCopyFrom
}
fromType, isPtrFrom := indirectType(from.Type())
toType, _ := indirectType(to.Type())
if fromType.Kind() == reflect.Interface {
fromType = reflect.TypeOf(from.Interface())
}
if toType.Kind() == reflect.Interface {
toType, _ = indirectType(reflect.TypeOf(to.Interface()))
oldTo := to
to = reflect.New(reflect.TypeOf(to.Interface())).Elem()
defer func() {
oldTo.Set(to)
}()
}
// Just set it if possible to assign for normal types
if from.Kind() != reflect.Slice && from.Kind() != reflect.Struct && from.Kind() != reflect.Map && (from.Type().AssignableTo(to.Type()) || from.Type().ConvertibleTo(to.Type())) {
if !isPtrFrom || !opt.DeepCopy {
to.Set(from.Convert(to.Type()))
} else {
fromCopy := reflect.New(from.Type())
fromCopy.Set(from.Elem())
to.Set(fromCopy.Convert(to.Type()))
}
return
}
if from.Kind() != reflect.Slice && fromType.Kind() == reflect.Map && toType.Kind() == reflect.Map {
if !fromType.Key().ConvertibleTo(toType.Key()) {
return ErrMapKeyNotMatch
}
if to.IsNil() {
to.Set(reflect.MakeMapWithSize(toType, from.Len()))
}
for _, k := range from.MapKeys() {
toKey := indirect(reflect.New(toType.Key()))
if !set(toKey, k, opt.DeepCopy, converters) {
return fmt.Errorf("%w map, old key: %v, new key: %v", ErrNotSupported, k.Type(), toType.Key())
}
elemType := toType.Elem()
if elemType.Kind() != reflect.Slice {
elemType, _ = indirectType(elemType)
}
toValue := indirect(reflect.New(elemType))
if !set(toValue, from.MapIndex(k), opt.DeepCopy, converters) {
if err = copier(toValue.Addr().Interface(), from.MapIndex(k).Interface(), opt); err != nil {
return err
}
}
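// Re-wrap toValue with pointers until its type matches the map's declared element type, then store it.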
for {
if elemType == toType.Elem() {
to.SetMapIndex(toKey, toValue)
break
}
elemType = reflect.PtrTo(elemType)
toValue = toValue.Addr()
}
}
return
}
if from.Kind() == reflect.Slice && to.Kind() == reflect.Slice && fromType.ConvertibleTo(toType) {
if to.IsNil() {
slice := reflect.MakeSlice(reflect.SliceOf(to.Type().Elem()), from.Len(), from.Cap())
to.Set(slice)
}
for i := 0; i < from.Len(); i++ {
if to.Len() < i+1 {
to.Set(reflect.Append(to, reflect.New(to.Type().Elem()).Elem()))
}
if !set(to.Index(i), from.Index(i), opt.DeepCopy, converters) {
// ignore error while copying slice element
err = copier(to.Index(i).Addr().Interface(), from.Index(i).Interface(), opt)
if err != nil {
continue
}
}
}
return
}
if fromType.Kind() != reflect.Struct || toType.Kind() != reflect.Struct {
// skip not supported type
return
}
if from.Kind() == reflect.Slice || to.Kind() == reflect.Slice {
isSlice = true
if from.Kind() == reflect.Slice {
amount = from.Len()
}
}
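// amount is 1 unless the source is a slice, in which case it is the number of elements to copy.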
for i := 0; i < amount; i++ {
var dest, source reflect.Value
if isSlice {
// source
if from.Kind() == reflect.Slice {
source = indirect(from.Index(i))
} else {
source = indirect(from)
}
// dest
dest = indirect(reflect.New(toType).Elem())
} else {
source = indirect(from)
dest = indirect(to)
}
destKind := dest.Kind()
initDest := false
if destKind == reflect.Interface {
initDest = true
dest = indirect(reflect.New(toType))
}
// Get tag options
flgs, err := getFlags(dest, source, toType, fromType)
if err != nil {
return err
}
// check source
if source.IsValid() {
copyUnexportedStructFields(dest, source)
// Copy from source field to dest field or method
fromTypeFields := deepFields(fromType)
for _, field := range fromTypeFields {
name := field.Name
// Get bit flags for field
fieldFlags, _ := flgs.BitFlags[name]
// Check if we should ignore copying
if (fieldFlags & tagIgnore) != 0 {
continue
}
srcFieldName, destFieldName := getFieldName(name, flgs)
if fromField := source.FieldByName(srcFieldName); fromField.IsValid() && !shouldIgnore(fromField, opt.IgnoreEmpty) {
// process for nested anonymous field
destFieldNotSet := false
if f, ok := dest.Type().FieldByName(destFieldName); ok {
for idx := range f.Index {
destField := dest.FieldByIndex(f.Index[:idx+1])
if destField.Kind() != reflect.Ptr {
continue
}
if !destField.IsNil() {
continue
}
if !destField.CanSet() {
destFieldNotSet = true
break
}
// destField is a nil pointer that can be set
newValue := reflect.New(destField.Type().Elem())
destField.Set(newValue)
}
}
if destFieldNotSet {
break
}
toField := dest.FieldByName(destFieldName)
if toField.IsValid() {
if toField.CanSet() {
if !set(toField, fromField, opt.DeepCopy, converters) {
if err := copier(toField.Addr().Interface(), fromField.Interface(), opt); err != nil {
return err
}
}
if fieldFlags != 0 {
// Note that a copy was made
flgs.BitFlags[name] = fieldFlags | hasCopied
}
}
} else {
// try to set to method
var toMethod reflect.Value
if dest.CanAddr() {
toMethod = dest.Addr().MethodByName(destFieldName)
} else {
toMethod = dest.MethodByName(destFieldName)
}
if toMethod.IsValid() && toMethod.Type().NumIn() == 1 && fromField.Type().AssignableTo(toMethod.Type().In(0)) {
toMethod.Call([]reflect.Value{fromField})
}
}
}
}
// Copy from source methods to dest fields
for _, field := range deepFields(toType) {
name := field.Name
srcFieldName, destFieldName := getFieldName(name, flgs)
var fromMethod reflect.Value
if source.CanAddr() {
fromMethod = source.Addr().MethodByName(srcFieldName)
} else {
fromMethod = source.MethodByName(srcFieldName)
}
if fromMethod.IsValid() && fromMethod.Type().NumIn() == 0 && fromMethod.Type().NumOut() == 1 && !shouldIgnore(fromMethod, opt.IgnoreEmpty) {
if toField := dest.FieldByName(destFieldName); toField.IsValid() && toField.CanSet() {
values := fromMethod.Call([]reflect.Value{})
if len(values) >= 1 {
set(toField, values[0], opt.DeepCopy, converters)
}
}
}
}
}
if isSlice && to.Kind() == reflect.Slice {
if dest.Addr().Type().AssignableTo(to.Type().Elem()) {
if to.Len() < i+1 {
to.Set(reflect.Append(to, dest.Addr()))
} else {
if !set(to.Index(i), dest.Addr(), opt.DeepCopy, converters) {
// ignore error while copying slice element
err = copier(to.Index(i).Addr().Interface(), dest.Addr().Interface(), opt)
if err != nil {
continue
}
}
}
} else if dest.Type().AssignableTo(to.Type().Elem()) {
if to.Len() < i+1 {
to.Set(reflect.Append(to, dest))
} else {
if !set(to.Index(i), dest, opt.DeepCopy, converters) {
// ignore error while copying slice element
err = copier(to.Index(i).Addr().Interface(), dest.Interface(), opt)
if err != nil {
continue
}
}
}
}
} else if initDest {
to.Set(dest)
}
err = checkBitFlags(flgs.BitFlags)
}
return
}
func copyUnexportedStructFields(to, from reflect.Value) {
if from.Kind() != reflect.Struct || to.Kind() != reflect.Struct || !from.Type().AssignableTo(to.Type()) {
return
}
// create a shallow copy of 'to' to get all fields
tmp := indirect(reflect.New(to.Type()))
tmp.Set(from)
// revert exported fields
for i := 0; i < to.NumField(); i++ {
if tmp.Field(i).CanSet() {
tmp.Field(i).Set(to.Field(i))
}
}
to.Set(tmp)
}
func shouldIgnore(v reflect.Value, ignoreEmpty bool) bool {
if !ignoreEmpty {
return false
}
return v.IsZero()
}
func deepFields(reflectType reflect.Type) []reflect.StructField {
if reflectType, _ = indirectType(reflectType); reflectType.Kind() == reflect.Struct {
fields := make([]reflect.StructField, 0, reflectType.NumField())
for i := 0; i < reflectType.NumField(); i++ {
v := reflectType.Field(i)
// PkgPath is the package path that qualifies a lower case (unexported)
// field name. It is empty for upper case (exported) field names.
// See https://golang.org/ref/spec#Uniqueness_of_identifiers
if v.PkgPath == "" {
fields = append(fields, v)
if v.Anonymous {
// also consider fields of anonymous fields as fields of the root
fields = append(fields, deepFields(v.Type)...)
}
}
}
return fields
}
return nil
}
func indirect(reflectValue reflect.Value) reflect.Value {
for reflectValue.Kind() == reflect.Ptr {
reflectValue = reflectValue.Elem()
}
return reflectValue
}
func indirectType(reflectType reflect.Type) (_ reflect.Type, isPtr bool) {
for reflectType.Kind() == reflect.Ptr || reflectType.Kind() == reflect.Slice {
reflectType = reflectType.Elem()
isPtr = true
}
return reflectType, isPtr
}
func set(to, from reflect.Value, deepCopy bool, converters map[converterPair]TypeConverter) bool {
if from.IsValid() {
if ok, err := lookupAndCopyWithConverter(to, from, converters); err != nil {
return false
} else if ok {
return true
}
if to.Kind() == reflect.Ptr {
// set `to` to nil if from is nil
if from.Kind() == reflect.Ptr && from.IsNil() {
to.Set(reflect.Zero(to.Type()))
return true
} else if to.IsNil() {
// `from` -> `to`
// sql.NullString -> *string
if fromValuer, ok := driverValuer(from); ok {
v, err := fromValuer.Value()
if err != nil {
return false
}
// if `from` is not valid do nothing with `to`
if v == nil {
return true
}
}
// allocate new `to` variable with default value (eg. *string -> new(string))
to.Set(reflect.New(to.Type().Elem()))
}
// depointer `to`
to = to.Elem()
}
if deepCopy {
toKind := to.Kind()
if toKind == reflect.Interface && to.IsNil() {
if reflect.TypeOf(from.Interface()) != nil {
to.Set(reflect.New(reflect.TypeOf(from.Interface())).Elem())
toKind = reflect.TypeOf(to.Interface()).Kind()
}
}
if from.Kind() == reflect.Ptr && from.IsNil() {
return true
}
if toKind == reflect.Struct || toKind == reflect.Map || toKind == reflect.Slice {
return false
}
}
if from.Type().ConvertibleTo(to.Type()) {
to.Set(from.Convert(to.Type()))
} else if toScanner, ok := to.Addr().Interface().(sql.Scanner); ok {
// `from` -> `to`
// *string -> sql.NullString
if from.Kind() == reflect.Ptr {
// if `from` is nil do nothing with `to`
if from.IsNil() {
return true
}
// depointer `from`
from = indirect(from)
}
// `from` -> `to`
// string -> sql.NullString
// set `to` by invoking method Scan(`from`)
err := toScanner.Scan(from.Interface())
if err != nil {
return false
}
} else if fromValuer, ok := driverValuer(from); ok {
// `from` -> `to`
// sql.NullString -> string
v, err := fromValuer.Value()
if err != nil {
return false
}
// if `from` is not valid do nothing with `to`
if v == nil {
return true
}
rv := reflect.ValueOf(v)
if rv.Type().AssignableTo(to.Type()) {
to.Set(rv)
}
} else if from.Kind() == reflect.Ptr {
return set(to, from.Elem(), deepCopy, converters)
} else {
return false
}
}
return true
}
// lookupAndCopyWithConverter looks up the type pair; on success, the TypeConverter's Fn is called to copy src to the dst field.
func lookupAndCopyWithConverter(to, from reflect.Value, converters map[converterPair]TypeConverter) (copied bool, err error) {
pair := converterPair{
SrcType: from.Type(),
DstType: to.Type(),
}
if cnv, ok := converters[pair]; ok {
result, err := cnv.Fn(from.Interface())
if err != nil {
return false, err
}
if result != nil {
to.Set(reflect.ValueOf(result))
} else {
// in case we've got a nil value to copy
to.Set(reflect.Zero(to.Type()))
}
return true, nil
}
return false, nil
}
// parseTags parses struct tags and returns uint8 bit flags and an optional field name.
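// For example, `copier:"must,nopanic"` sets both bit flags, while `copier:"EmployeNum"` maps the field to that tag name.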
func parseTags(tag string) (flg uint8, name string, err error) {
for _, t := range strings.Split(tag, ",") {
switch t {
case "-":
flg = tagIgnore
return
case "must":
flg = flg | tagMust
case "nopanic":
flg = flg | tagNoPanic
default:
if unicode.IsUpper([]rune(t)[0]) {
name = strings.TrimSpace(t)
} else {
err = errors.New("copier field name tag must be start upper case")
}
}
}
return
}
// getFlags parses struct tags for bit flags and field names.
func getFlags(dest, src reflect.Value, toType, fromType reflect.Type) (flags, error) {
flgs := flags{
BitFlags: map[string]uint8{},
SrcNames: tagNameMapping{
FieldNameToTag: map[string]string{},
TagToFieldName: map[string]string{},
},
DestNames: tagNameMapping{
FieldNameToTag: map[string]string{},
TagToFieldName: map[string]string{},
},
}
var toTypeFields, fromTypeFields []reflect.StructField
if dest.IsValid() {
toTypeFields = deepFields(toType)
}
if src.IsValid() {
fromTypeFields = deepFields(fromType)
}
// Get a list of dest tags
for _, field := range toTypeFields {
tags := field.Tag.Get("copier")
if tags != "" {
var name string
var err error
if flgs.BitFlags[field.Name], name, err = parseTags(tags); err != nil {
return flags{}, err
} else if name != "" {
flgs.DestNames.FieldNameToTag[field.Name] = name
flgs.DestNames.TagToFieldName[name] = field.Name
}
}
}
// Get a list of source tags
for _, field := range fromTypeFields {
tags := field.Tag.Get("copier")
if tags != "" {
var name string
var err error
if _, name, err = parseTags(tags); err != nil {
return flags{}, err
} else if name != "" {
flgs.SrcNames.FieldNameToTag[field.Name] = name
flgs.SrcNames.TagToFieldName[name] = field.Name
}
}
}
return flgs, nil
}
// checkBitFlags checks flags for error or panic conditions.
func checkBitFlags(flagsList map[string]uint8) (err error) {
// Check flag conditions were met
for name, flgs := range flagsList {
if flgs&hasCopied == 0 {
switch {
case flgs&tagMust != 0 && flgs&tagNoPanic != 0:
err = fmt.Errorf("field %s has must tag but was not copied", name)
return
case flgs&(tagMust) != 0:
panic(fmt.Sprintf("Field %s has must tag but was not copied", name))
}
}
}
return
}
func getFieldName(fieldName string, flgs flags) (srcFieldName string, destFieldName string) {
// get dest field name
if srcTagName, ok := flgs.SrcNames.FieldNameToTag[fieldName]; ok {
destFieldName = srcTagName
if destTagName, ok := flgs.DestNames.TagToFieldName[srcTagName]; ok {
destFieldName = destTagName
}
} else {
if destTagName, ok := flgs.DestNames.TagToFieldName[fieldName]; ok {
destFieldName = destTagName
}
}
if destFieldName == "" {
destFieldName = fieldName
}
// get source field name
if destTagName, ok := flgs.DestNames.FieldNameToTag[fieldName]; ok {
srcFieldName = destTagName
if srcField, ok := flgs.SrcNames.TagToFieldName[destTagName]; ok {
srcFieldName = srcField
}
} else {
if srcField, ok := flgs.SrcNames.TagToFieldName[fieldName]; ok {
srcFieldName = srcField
}
}
if srcFieldName == "" {
srcFieldName = fieldName
}
return
}
func driverValuer(v reflect.Value) (i driver.Valuer, ok bool) {
if !v.CanAddr() {
i, ok = v.Interface().(driver.Valuer)
return
}
i, ok = v.Addr().Interface().(driver.Valuer)
return
}

10
vendor/github.com/jinzhu/copier/errors.go generated vendored Normal file
View File

@@ -0,0 +1,10 @@
package copier
import "errors"
var (
ErrInvalidCopyDestination = errors.New("copy destination is invalid")
ErrInvalidCopyFrom = errors.New("copy from is invalid")
ErrMapKeyNotMatch = errors.New("map's key type doesn't match")
ErrNotSupported = errors.New("not supported")
)

34
vendor/github.com/logrusorgru/aurora/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,34 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof
*.out
# coverage
cover.html
# benchcmp
*.cmp

9
vendor/github.com/logrusorgru/aurora/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,9 @@
language: go
go:
- tip
before_install:
- go get github.com/axw/gocov/gocov
- go get github.com/mattn/goveralls
- go get golang.org/x/tools/cmd/cover
script:
- $HOME/gopath/bin/goveralls -service=travis-ci

8
vendor/github.com/logrusorgru/aurora/AUTHORS.md generated vendored Normal file
View File

@@ -0,0 +1,8 @@
AUTHORS
=======
- Konstantin Ivanov @logrusorgru
- Mattias Eriksson @snaggen
- Ousmane Traore @otraore
- Simon Legner @simon04
- Sevenate @sevenate

59
vendor/github.com/logrusorgru/aurora/CHANGELOG.md generated vendored Normal file
View File

@@ -0,0 +1,59 @@
Changes
=======
---
16:05:02
Thursday, July 2, 2020
Change license from the WTFPL to the Unlicense due to pkg.go.dev restriction.
---
15:39:40
Wednesday, April 17, 2019
- Bright background and foreground colors
- 8-bit indexed colors `Index`, `BgIndex`
- 24 grayscale colors `Gray`, `BgGray`
- `Yellow` and `BgYellow` methods; mark Brown and BgBrown as deprecated
Following the specifications, the correct name of the color is yellow, but
for historical reasons it has been called brown. Both the `Yellow` and the
`Brown` methods (including the `Bg+` variants) represent the same colors. The Brown
methods are kept for backward compatibility until the Go modules production release.
- Additional formats
+ `Faint` that is opposite to the `Bold`
+ `DoublyUnderline`
+ `Fraktur`
+ `Italic`
+ `Underline`
+ `SlowBlink` with `Blink` alias
+ `RapidBlink`
+ `Reverse` that is alias for the `Inverse`
+ `Conceal` with `Hidden` alias
+ `CrossedOut` with `StrikeThrough` alias
+ `Framed`
+ `Encircled`
+ `Overlined`
- Add AUTHORS.md file and change all copyright notices.
- `Reset` method to create a clear value; it replaces the `Bleach` method,
which is now marked as deprecated.
---
14:25:49
Friday, August 18, 2017
- LICENSE.md changed to LICENSE
- fix email in README.md
- add "no warranty" to README.md
- set proper copyright date
---
16:59:28
Tuesday, November 8, 2016
- Removed sync.Pool
- Little optimizations (very little)
- Improved benchmarks
---

24
vendor/github.com/logrusorgru/aurora/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>

314
vendor/github.com/logrusorgru/aurora/README.md generated vendored Normal file
View File

@@ -0,0 +1,314 @@
Aurora
======
[![go.dev reference](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white)](https://pkg.go.dev/github.com/logrusorgru/aurora?tab=doc)
[![Unlicense](https://img.shields.io/badge/license-unlicense-blue.svg)](http://unlicense.org/)
[![Build Status](https://travis-ci.org/logrusorgru/aurora.svg)](https://travis-ci.org/logrusorgru/aurora)
[![Coverage Status](https://coveralls.io/repos/logrusorgru/aurora/badge.svg?branch=master)](https://coveralls.io/r/logrusorgru/aurora?branch=master)
[![GoReportCard](https://goreportcard.com/badge/logrusorgru/aurora)](https://goreportcard.com/report/logrusorgru/aurora)
[![Gitter](https://img.shields.io/badge/chat-on_gitter-46bc99.svg?logo=data:image%2Fsvg%2Bxml%3Bbase64%2CPHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGhlaWdodD0iMTQiIHdpZHRoPSIxNCI%2BPGcgZmlsbD0iI2ZmZiI%2BPHJlY3QgeD0iMCIgeT0iMyIgd2lkdGg9IjEiIGhlaWdodD0iNSIvPjxyZWN0IHg9IjIiIHk9IjQiIHdpZHRoPSIxIiBoZWlnaHQ9IjciLz48cmVjdCB4PSI0IiB5PSI0IiB3aWR0aD0iMSIgaGVpZ2h0PSI3Ii8%2BPHJlY3QgeD0iNiIgeT0iNCIgd2lkdGg9IjEiIGhlaWdodD0iNCIvPjwvZz48L3N2Zz4%3D&logoWidth=10)](https://gitter.im/logrusorgru/aurora)
Ultimate ANSI colors for Golang. The package supports Printf/Sprintf etc.
![aurora logo](https://github.com/logrusorgru/aurora/blob/master/gopher_aurora.png)
# TOC
- [Installation](#installation)
- [Usage](#usage)
+ [Simple](#simple)
+ [Printf](#printf)
+ [aurora.Sprintf](#aurorasprintf)
+ [Enable/Disable colors](#enabledisable-colors)
- [Chains](#chains)
- [Colorize](#colorize)
- [Grayscale](#grayscale)
- [8-bit colors](#8-bit-colors)
- [Supported Colors & Formats](#supported-colors--formats)
+ [All colors](#all-colors)
+ [Standard and bright colors](#standard-and-bright-colors)
+ [Formats are likely supported](#formats-are-likely-supported)
+ [Formats are likely unsupported](#formats-are-likely-unsupported)
- [Limitations](#limitations)
+ [Windows](#windows)
+ [TTY](#tty)
- [Licensing](#licensing)
# Installation
Get
```
go get -u github.com/logrusorgru/aurora
```
Test
```
go test -cover github.com/logrusorgru/aurora
```
# Usage
### Simple
```go
package main
import (
"fmt"
. "github.com/logrusorgru/aurora"
)
func main() {
fmt.Println("Hello,", Magenta("Aurora"))
fmt.Println(Bold(Cyan("Cya!")))
}
```
![simple png](https://github.com/logrusorgru/aurora/blob/master/simple.png)
### Printf
```go
package main
import (
"fmt"
. "github.com/logrusorgru/aurora"
)
func main() {
fmt.Printf("Got it %d times\n", Green(1240))
fmt.Printf("PI is %+1.2e\n", Cyan(3.14))
}
```
![printf png](https://github.com/logrusorgru/aurora/blob/master/printf.png)
### aurora.Sprintf
```go
package main
import (
"fmt"
. "github.com/logrusorgru/aurora"
)
func main() {
fmt.Println(Sprintf(Magenta("Got it %d times"), Green(1240)))
}
```
![sprintf png](https://github.com/logrusorgru/aurora/blob/master/sprintf.png)
### Enable/Disable colors
```go
package main
import (
"fmt"
"flag"
"github.com/logrusorgru/aurora"
)
// colorizer
var au aurora.Aurora
var colors = flag.Bool("colors", false, "enable or disable colors")
func init() {
flag.Parse()
au = aurora.NewAurora(*colors)
}
func main() {
// use colorizer
fmt.Println(au.Green("Hello"))
}
```
Without flags:
![disable png](https://github.com/logrusorgru/aurora/blob/master/disable.png)
With `-colors` flag:
![enable png](https://github.com/logrusorgru/aurora/blob/master/enable.png)
# Chains
The following samples are equal
```go
x := BgMagenta(Bold(Red("x")))
```
```go
x := Red("x").Bold().BgMagenta()
```
The second is more readable
# Colorize
There is a `Colorize` function that allows the colors and format to be
chosen elsewhere and applied separately
```go
func getColors() Color {
// some stuff that returns appropriate colors and format
}
// [...]
func main() {
fmt.Println(Colorize("Greeting", getColors()))
}
```
Less complicated example
```go
x := Colorize("Greeting", GreenFg|GrayBg|BoldFm)
```
Unlike other color functions and methods (such as Red/BgBlue etc.),
`Colorize` clears any previous colors
```go
x := Red("x").Colorize(BgGreen) // will be with green background only
```
# Grayscale
```go
fmt.Println(" ",
Gray(1-1, " 00-23 ").BgGray(24-1),
Gray(4-1, " 03-19 ").BgGray(20-1),
Gray(8-1, " 07-15 ").BgGray(16-1),
Gray(12-1, " 11-11 ").BgGray(12-1),
Gray(16-1, " 15-07 ").BgGray(8-1),
Gray(20-1, " 19-03 ").BgGray(4-1),
Gray(24-1, " 23-00 ").BgGray(1-1),
)
```
![grayscale png](https://github.com/logrusorgru/aurora/blob/master/aurora_grayscale.png)
# 8-bit colors
The `Index` and `BgIndex` methods implement 8-bit colors (see the sketch after the table).
| Index/BgIndex | Meaning | Foreground | Background |
| -------------- | --------------- | ---------- | ---------- |
| 0- 7 | standard colors | 30- 37 | 40- 47 |
| 8- 15 | bright colors | 90- 97 | 100-107 |
| 16-231 | 216 colors | 38;5;n | 48;5;n |
| 232-255 | 24 grayscale | 38;5;n | 48;5;n |
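A short usage sketch of the indexed and grayscale helpers (the index and gray values below are arbitrary examples):
```go
package main

import (
	"fmt"

	. "github.com/logrusorgru/aurora"
)

func main() {
	fmt.Println(Index(202, "orange-ish text"))       // 8-bit color from the 216-color cube
	fmt.Println(BgIndex(19, "dark blue background")) // 8-bit background color
	fmt.Println(Gray(18, " light gray ").BgGray(4))  // grayscale, 0-23
}
```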
# Supported colors & formats
- formats
+ bold (1)
+ faint (2)
+ doubly-underline (21)
+ fraktur (20)
+ italic (3)
+ underline (4)
+ slow blink (5)
+ rapid blink (6)
+ reverse video (7)
+ conceal (8)
+ crossed out (9)
+ framed (51)
+ encircled (52)
+ overlined (53)
- background and foreground colors, including bright
+ black
+ red
+ green
+ yellow (brown)
+ blue
+ magenta
+ cyan
+ white
+ 24 grayscale colors
+ 216 8-bit colors
### All colors
![linux png](https://github.com/logrusorgru/aurora/blob/master/aurora_colors_black.png)
![white png](https://github.com/logrusorgru/aurora/blob/master/aurora_colors_white.png)
### Standard and bright colors
![linux black standard png](https://github.com/logrusorgru/aurora/blob/master/aurora_black_standard.png)
![linux white standard png](https://github.com/logrusorgru/aurora/blob/master/aurora_white_standard.png)
### Formats are likely supported
![formats supported gif](https://github.com/logrusorgru/aurora/blob/master/aurora_formats.gif)
### Formats are likely unsupported
![formats rarely supported png](https://github.com/logrusorgru/aurora/blob/master/aurora_rarely_supported.png)
# Limitations
There is no way to represent `%T` and `%p` with colors using
a standard approach
```go
package main
import (
"fmt"
. "github.com/logrusorgru/aurora"
)
func main() {
r := Red("red")
var i int
fmt.Printf("%T %p\n", r, Green(&i))
}
```
Output will be without colors
```
aurora.value %!p(aurora.value={0xc42000a310 768 0})
```
The obvious workaround is `Red(fmt.Sprintf("%T", some))`
### Windows
The Aurora provides ANSI colors only, so there is no support for Windows. That said, there are workarounds available.
Check out these comments to learn more:
- [Using go-colorable](https://github.com/logrusorgru/aurora/issues/2#issuecomment-299014211).
- [Using registry for Windows 10](https://github.com/logrusorgru/aurora/issues/10#issue-476361247).
### TTY
The Aurora has no internal TTY detectors by design. Take a look at
[this comment](https://github.com/logrusorgru/aurora/issues/2#issuecomment-299030108) if you want to turn
on colors for a terminal only, and turn them off for a file.
### Licensing
Copyright &copy; 2016-2020 The Aurora Authors. This work is free.
It comes without any warranty, to the extent permitted by applicable
law. You can redistribute it and/or modify it under the terms of the
Unlicense. See the LICENSE file for more details.

725
vendor/github.com/logrusorgru/aurora/aurora.go generated vendored Normal file
View File

@@ -0,0 +1,725 @@
//
// Copyright (c) 2016-2020 The Aurora Authors. All rights reserved.
// This program is free software. It comes without any warranty,
// to the extent permitted by applicable law. You can redistribute
// it and/or modify it under the terms of the Unlicense. See LICENSE
// file for more details or see below.
//
//
// This is free and unencumbered software released into the public domain.
//
// Anyone is free to copy, modify, publish, use, compile, sell, or
// distribute this software, either in source code form or as a compiled
// binary, for any purpose, commercial or non-commercial, and by any
// means.
//
// In jurisdictions that recognize copyright laws, the author or authors
// of this software dedicate any and all copyright interest in the
// software to the public domain. We make this dedication for the benefit
// of the public at large and to the detriment of our heirs and
// successors. We intend this dedication to be an overt act of
// relinquishment in perpetuity of all present and future rights to this
// software under copyright law.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
// IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
// OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
// OTHER DEALINGS IN THE SOFTWARE.
//
// For more information, please refer to <http://unlicense.org/>
//
// Package aurora implements ANSI-colors
package aurora
import (
"fmt"
)
// An Aurora implements the colorizer interface.
// It can also be a non-colorizer
type Aurora interface {
// Reset wraps given argument returning Value
// without formats and colors.
Reset(arg interface{}) Value
//
// Formats
//
//
// Bold or increased intensity (1).
Bold(arg interface{}) Value
// Faint, decreased intensity (2).
Faint(arg interface{}) Value
//
// DoublyUnderline or Bold off, double-underline
// per ECMA-48 (21).
DoublyUnderline(arg interface{}) Value
// Fraktur, rarely supported (20).
Fraktur(arg interface{}) Value
//
// Italic, not widely supported, sometimes
// treated as inverse (3).
Italic(arg interface{}) Value
// Underline (4).
Underline(arg interface{}) Value
//
// SlowBlink, blinking less than 150
// per minute (5).
SlowBlink(arg interface{}) Value
// RapidBlink, blinking 150+ per minute,
// not widely supported (6).
RapidBlink(arg interface{}) Value
// Blink is alias for the SlowBlink.
Blink(arg interface{}) Value
//
// Reverse video, swap foreground and
// background colors (7).
Reverse(arg interface{}) Value
// Inverse is alias for the Reverse
Inverse(arg interface{}) Value
//
// Conceal, hidden, not widely supported (8).
Conceal(arg interface{}) Value
// Hidden is alias for the Conceal
Hidden(arg interface{}) Value
//
// CrossedOut, characters legible, but
// marked for deletion (9).
CrossedOut(arg interface{}) Value
// StrikeThrough is alias for the CrossedOut.
StrikeThrough(arg interface{}) Value
//
// Framed (51).
Framed(arg interface{}) Value
// Encircled (52).
Encircled(arg interface{}) Value
//
// Overlined (53).
Overlined(arg interface{}) Value
//
// Foreground colors
//
//
// Black foreground color (30)
Black(arg interface{}) Value
// Red foreground color (31)
Red(arg interface{}) Value
// Green foreground color (32)
Green(arg interface{}) Value
// Yellow foreground color (33)
Yellow(arg interface{}) Value
// Brown foreground color (33)
//
// Deprecated: use Yellow instead, following specification
Brown(arg interface{}) Value
// Blue foreground color (34)
Blue(arg interface{}) Value
// Magenta foreground color (35)
Magenta(arg interface{}) Value
// Cyan foreground color (36)
Cyan(arg interface{}) Value
// White foreground color (37)
White(arg interface{}) Value
//
// Bright foreground colors
//
// BrightBlack foreground color (90)
BrightBlack(arg interface{}) Value
// BrightRed foreground color (91)
BrightRed(arg interface{}) Value
// BrightGreen foreground color (92)
BrightGreen(arg interface{}) Value
// BrightYellow foreground color (93)
BrightYellow(arg interface{}) Value
// BrightBlue foreground color (94)
BrightBlue(arg interface{}) Value
// BrightMagenta foreground color (95)
BrightMagenta(arg interface{}) Value
// BrightCyan foreground color (96)
BrightCyan(arg interface{}) Value
// BrightWhite foreground color (97)
BrightWhite(arg interface{}) Value
//
// Other
//
// Index of pre-defined 8-bit foreground color
// from 0 to 255 (38;5;n).
//
// 0- 7: standard colors (as in ESC [ 30–37 m)
// 8- 15: high intensity colors (as in ESC [ 90–97 m)
// 16-231: 6 × 6 × 6 cube (216 colors): 16 + 36 × r + 6 × g + b (0 ≤ r, g, b ≤ 5)
// 232-255: grayscale from black to white in 24 steps
//
Index(n uint8, arg interface{}) Value
// Gray from 0 to 23.
Gray(n uint8, arg interface{}) Value
//
// Background colors
//
//
// BgBlack background color (40)
BgBlack(arg interface{}) Value
// BgRed background color (41)
BgRed(arg interface{}) Value
// BgGreen background color (42)
BgGreen(arg interface{}) Value
// BgYellow background color (43)
BgYellow(arg interface{}) Value
// BgBrown background color (43)
//
// Deprecated: use BgYellow instead, following specification
BgBrown(arg interface{}) Value
// BgBlue background color (44)
BgBlue(arg interface{}) Value
// BgMagenta background color (45)
BgMagenta(arg interface{}) Value
// BgCyan background color (46)
BgCyan(arg interface{}) Value
// BgWhite background color (47)
BgWhite(arg interface{}) Value
//
// Bright background colors
//
// BgBrightBlack background color (100)
BgBrightBlack(arg interface{}) Value
// BgBrightRed background color (101)
BgBrightRed(arg interface{}) Value
// BgBrightGreen background color (102)
BgBrightGreen(arg interface{}) Value
// BgBrightYellow background color (103)
BgBrightYellow(arg interface{}) Value
// BgBrightBlue background color (104)
BgBrightBlue(arg interface{}) Value
// BgBrightMagenta background color (105)
BgBrightMagenta(arg interface{}) Value
// BgBrightCyan background color (106)
BgBrightCyan(arg interface{}) Value
// BgBrightWhite background color (107)
BgBrightWhite(arg interface{}) Value
//
// Other
//
// BgIndex of 8-bit pre-defined background color
// from 0 to 255 (48;5;n).
//
// 0- 7: standard colors (as in ESC [ 40–47 m)
// 8- 15: high intensity colors (as in ESC [ 100–107 m)
// 16-231: 6 × 6 × 6 cube (216 colors): 16 + 36 × r + 6 × g + b (0 ≤ r, g, b ≤ 5)
// 232-255: grayscale from black to white in 24 steps
//
BgIndex(n uint8, arg interface{}) Value
// BgGray from 0 to 23.
BgGray(n uint8, arg interface{}) Value
//
// Special
//
// Colorize removes existing colors and
// formats of the argument and applies given.
Colorize(arg interface{}, color Color) Value
//
// Support methods
//
// Sprintf allows to use colored format.
Sprintf(format interface{}, args ...interface{}) string
}
// NewAurora returns a new Aurora interface that
// will or will not support colors depending on
// the enableColors argument
func NewAurora(enableColors bool) Aurora {
if enableColors {
return aurora{}
}
return auroraClear{}
}
// no colors
type auroraClear struct{}
func (auroraClear) Reset(arg interface{}) Value { return valueClear{arg} }
func (auroraClear) Bold(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Faint(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) DoublyUnderline(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Fraktur(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Italic(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Underline(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) SlowBlink(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) RapidBlink(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Blink(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Reverse(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Inverse(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Conceal(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Hidden(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) CrossedOut(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) StrikeThrough(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Framed(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Encircled(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Overlined(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Black(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Red(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Green(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Yellow(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Brown(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Blue(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Magenta(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Cyan(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) White(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightBlack(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightRed(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightGreen(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightYellow(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightBlue(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightMagenta(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightCyan(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BrightWhite(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Index(_ uint8, arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Gray(_ uint8, arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBlack(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgRed(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgGreen(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgYellow(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrown(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBlue(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgMagenta(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgCyan(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgWhite(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightBlack(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightRed(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightGreen(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightYellow(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightBlue(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightMagenta(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightCyan(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgBrightWhite(arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgIndex(_ uint8, arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) BgGray(_ uint8, arg interface{}) Value {
return valueClear{arg}
}
func (auroraClear) Colorize(arg interface{}, _ Color) Value {
return valueClear{arg}
}
func (auroraClear) Sprintf(format interface{}, args ...interface{}) string {
if str, ok := format.(string); ok {
return fmt.Sprintf(str, args...)
}
return fmt.Sprintf(fmt.Sprint(format), args...)
}
// colorized
type aurora struct{}
func (aurora) Reset(arg interface{}) Value {
return Reset(arg)
}
func (aurora) Bold(arg interface{}) Value {
return Bold(arg)
}
func (aurora) Faint(arg interface{}) Value {
return Faint(arg)
}
func (aurora) DoublyUnderline(arg interface{}) Value {
return DoublyUnderline(arg)
}
func (aurora) Fraktur(arg interface{}) Value {
return Fraktur(arg)
}
func (aurora) Italic(arg interface{}) Value {
return Italic(arg)
}
func (aurora) Underline(arg interface{}) Value {
return Underline(arg)
}
func (aurora) SlowBlink(arg interface{}) Value {
return SlowBlink(arg)
}
func (aurora) RapidBlink(arg interface{}) Value {
return RapidBlink(arg)
}
func (aurora) Blink(arg interface{}) Value {
return Blink(arg)
}
func (aurora) Reverse(arg interface{}) Value {
return Reverse(arg)
}
func (aurora) Inverse(arg interface{}) Value {
return Inverse(arg)
}
func (aurora) Conceal(arg interface{}) Value {
return Conceal(arg)
}
func (aurora) Hidden(arg interface{}) Value {
return Hidden(arg)
}
func (aurora) CrossedOut(arg interface{}) Value {
return CrossedOut(arg)
}
func (aurora) StrikeThrough(arg interface{}) Value {
return StrikeThrough(arg)
}
func (aurora) Framed(arg interface{}) Value {
return Framed(arg)
}
func (aurora) Encircled(arg interface{}) Value {
return Encircled(arg)
}
func (aurora) Overlined(arg interface{}) Value {
return Overlined(arg)
}
func (aurora) Black(arg interface{}) Value {
return Black(arg)
}
func (aurora) Red(arg interface{}) Value {
return Red(arg)
}
func (aurora) Green(arg interface{}) Value {
return Green(arg)
}
func (aurora) Yellow(arg interface{}) Value {
return Yellow(arg)
}
func (aurora) Brown(arg interface{}) Value {
return Brown(arg)
}
func (aurora) Blue(arg interface{}) Value {
return Blue(arg)
}
func (aurora) Magenta(arg interface{}) Value {
return Magenta(arg)
}
func (aurora) Cyan(arg interface{}) Value {
return Cyan(arg)
}
func (aurora) White(arg interface{}) Value {
return White(arg)
}
func (aurora) BrightBlack(arg interface{}) Value {
return BrightBlack(arg)
}
func (aurora) BrightRed(arg interface{}) Value {
return BrightRed(arg)
}
func (aurora) BrightGreen(arg interface{}) Value {
return BrightGreen(arg)
}
func (aurora) BrightYellow(arg interface{}) Value {
return BrightYellow(arg)
}
func (aurora) BrightBlue(arg interface{}) Value {
return BrightBlue(arg)
}
func (aurora) BrightMagenta(arg interface{}) Value {
return BrightMagenta(arg)
}
func (aurora) BrightCyan(arg interface{}) Value {
return BrightCyan(arg)
}
func (aurora) BrightWhite(arg interface{}) Value {
return BrightWhite(arg)
}
func (aurora) Index(index uint8, arg interface{}) Value {
return Index(index, arg)
}
func (aurora) Gray(n uint8, arg interface{}) Value {
return Gray(n, arg)
}
func (aurora) BgBlack(arg interface{}) Value {
return BgBlack(arg)
}
func (aurora) BgRed(arg interface{}) Value {
return BgRed(arg)
}
func (aurora) BgGreen(arg interface{}) Value {
return BgGreen(arg)
}
func (aurora) BgYellow(arg interface{}) Value {
return BgYellow(arg)
}
func (aurora) BgBrown(arg interface{}) Value {
return BgBrown(arg)
}
func (aurora) BgBlue(arg interface{}) Value {
return BgBlue(arg)
}
func (aurora) BgMagenta(arg interface{}) Value {
return BgMagenta(arg)
}
func (aurora) BgCyan(arg interface{}) Value {
return BgCyan(arg)
}
func (aurora) BgWhite(arg interface{}) Value {
return BgWhite(arg)
}
func (aurora) BgBrightBlack(arg interface{}) Value {
return BgBrightBlack(arg)
}
func (aurora) BgBrightRed(arg interface{}) Value {
return BgBrightRed(arg)
}
func (aurora) BgBrightGreen(arg interface{}) Value {
return BgBrightGreen(arg)
}
func (aurora) BgBrightYellow(arg interface{}) Value {
return BgBrightYellow(arg)
}
func (aurora) BgBrightBlue(arg interface{}) Value {
return BgBrightBlue(arg)
}
func (aurora) BgBrightMagenta(arg interface{}) Value {
return BgBrightMagenta(arg)
}
func (aurora) BgBrightCyan(arg interface{}) Value {
return BgBrightCyan(arg)
}
func (aurora) BgBrightWhite(arg interface{}) Value {
return BgBrightWhite(arg)
}
func (aurora) BgIndex(n uint8, arg interface{}) Value {
return BgIndex(n, arg)
}
func (aurora) BgGray(n uint8, arg interface{}) Value {
return BgGray(n, arg)
}
func (aurora) Colorize(arg interface{}, color Color) Value {
return Colorize(arg, color)
}
func (aurora) Sprintf(format interface{}, args ...interface{}) string {
return Sprintf(format, args...)
}

BIN
vendor/github.com/logrusorgru/aurora/aurora_formats.gif generated vendored Normal file

398
vendor/github.com/logrusorgru/aurora/color.go generated vendored Normal file
View File

@@ -0,0 +1,398 @@
//
// Copyright (c) 2016-2020 The Aurora Authors. All rights reserved.
// This program is free software. It comes without any warranty,
// to the extent permitted by applicable law. You can redistribute
// it and/or modify it under the terms of the Unlicense. See LICENSE
// file for more details or see below.
//
//
// This is free and unencumbered software released into the public domain.
//
// Anyone is free to copy, modify, publish, use, compile, sell, or
// distribute this software, either in source code form or as a compiled
// binary, for any purpose, commercial or non-commercial, and by any
// means.
//
// In jurisdictions that recognize copyright laws, the author or authors
// of this software dedicate any and all copyright interest in the
// software to the public domain. We make this dedication for the benefit
// of the public at large and to the detriment of our heirs and
// successors. We intend this dedication to be an overt act of
// relinquishment in perpetuity of all present and future rights to this
// software under copyright law.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
// IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
// OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
// OTHER DEALINGS IN THE SOFTWARE.
//
// For more information, please refer to <http://unlicense.org/>
//
package aurora
// A Color type is a color. It can contain
// one background color, one foreground color
// and a format, including ideogram related
// formats.
type Color uint
/*
Developer note.
The int type is architecture dependent and can be
represented as int32 or int64. Thus, we use only
32 bits to stay fast and cross-platform.
All supported formats require 14 bits and occupy
the first 14 bits.
A foreground color requires 8 bits + 1 bit (presence flag),
and the same goes for the background color.
The Color representation:
[ bg 8 bit ] [fg 8 bit ] [ fg/bg 2 bits ] [ fm 14 bits ]
https://play.golang.org/p/fq2zcNstFoF
*/
// Special formats
const (
BoldFm Color = 1 << iota // 1
FaintFm // 2
ItalicFm // 3
UnderlineFm // 4
SlowBlinkFm // 5
RapidBlinkFm // 6
ReverseFm // 7
ConcealFm // 8
CrossedOutFm // 9
FrakturFm // 20
DoublyUnderlineFm // 21 or bold off for some systems
FramedFm // 51
EncircledFm // 52
OverlinedFm // 53
InverseFm = ReverseFm // alias to ReverseFm
BlinkFm = SlowBlinkFm // alias to SlowBlinkFm
HiddenFm = ConcealFm // alias to ConcealFm
StrikeThroughFm = CrossedOutFm // alias to CrossedOutFm
maskFm = BoldFm | FaintFm |
ItalicFm | UnderlineFm |
SlowBlinkFm | RapidBlinkFm |
ReverseFm |
ConcealFm | CrossedOutFm |
FrakturFm | DoublyUnderlineFm |
FramedFm | EncircledFm | OverlinedFm
flagFg Color = 1 << 14 // presence flag (14th bit)
flagBg Color = 1 << 15 // presence flag (15th bit)
shiftFg = 16 // shift for foreground (starting from 16th bit)
shiftBg = 24 // shift for background (starting from 24th bit)
)
// Foreground colors and related formats
const (
// 8 bits
// [ 0; 7] - 30-37
// [ 8; 15] - 90-97
// [ 16; 231] - RGB
// [232; 255] - grayscale
BlackFg Color = (iota << shiftFg) | flagFg // 30, 90
RedFg // 31, 91
GreenFg // 32, 92
YellowFg // 33, 93
BlueFg // 34, 94
MagentaFg // 35, 95
CyanFg // 36, 96
WhiteFg // 37, 97
BrightFg Color = ((1 << 3) << shiftFg) | flagFg // -> 90
// the BrightFg itself doesn't represent
// a color, thus it has not flagFg
// 5 bits
// BrownFg represents brown foreground color.
//
// Deprecated: use YellowFg instead, following specifications
BrownFg = YellowFg
//
maskFg = (0xff << shiftFg) | flagFg
)
// Background colors and related formats
const (
// 8 bits
// [ 0; 7] - 40-47
// [ 8; 15] - 100-107
// [ 16; 231] - RGB
// [232; 255] - grayscale
BlackBg Color = (iota << shiftBg) | flagBg // 40, 100
RedBg // 41, 101
GreenBg // 42, 102
YellowBg // 43, 103
BlueBg // 44, 104
MagentaBg // 45, 105
CyanBg // 46, 106
WhiteBg // 47, 107
BrightBg Color = ((1 << 3) << shiftBg) | flagBg // -> 100
// the BrightBg itself doesn't represent
// a color, thus it has not flagBg
// 5 bits
// BrownBg represents brown background color.
//
// Deprecated: use YellowBg instead, following specifications
BrownBg = YellowBg
//
maskBg = (0xff << shiftBg) | flagBg
)
const (
availFlags = "-+# 0"
esc = "\033["
clear = esc + "0m"
)
// IsValid returns true always
//
// Deprecated: don't use this method anymore
func (c Color) IsValid() bool {
return true
}
// Nos returns a string like 1;7;31;45. It
// may be an empty string for an empty color.
// If zero is true, then the string
// is prepended with 0;
func (c Color) Nos(zero bool) string {
return string(c.appendNos(make([]byte, 0, 59), zero))
}
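// exampleColorNosSketch is an illustrative sketch, not part of the upstream
// aurora sources: it shows how the 32-bit layout described above combines a
// format, a foreground and a background into one Color, and what Nos yields
// for that combination.
func exampleColorNosSketch() string {
	c := BoldFm | RedFg | YellowBg // bold + red foreground + yellow background
	return c.Nos(false)            // "1;31;43"
}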
func appendCond(bs []byte, cond, semi bool, vals ...byte) []byte {
if !cond {
return bs
}
return appendSemi(bs, semi, vals...)
}
// if the semi is true, then prepend with semicolon
func appendSemi(bs []byte, semi bool, vals ...byte) []byte {
if semi {
bs = append(bs, ';')
}
return append(bs, vals...)
}
func itoa(t byte) string {
var (
a [3]byte
j = 2
)
for i := 0; i < 3; i, j = i+1, j-1 {
a[j] = '0' + t%10
if t = t / 10; t == 0 {
break
}
}
return string(a[j:])
}
func (c Color) appendFg(bs []byte, zero bool) []byte {
if zero || c&maskFm != 0 {
bs = append(bs, ';')
}
// 0- 7 : 30-37
// 8-15 : 90-97
// > 15 : 38;5;val
switch fg := (c & maskFg) >> shiftFg; {
case fg <= 7:
// '3' and the value itself
bs = append(bs, '3', '0'+byte(fg))
case fg <= 15:
// '9' and the value itself
bs = append(bs, '9', '0'+byte(fg&^0x08)) // clear bright flag
default:
bs = append(bs, '3', '8', ';', '5', ';')
bs = append(bs, itoa(byte(fg))...)
}
return bs
}
func (c Color) appendBg(bs []byte, zero bool) []byte {
if zero || c&(maskFm|maskFg) != 0 {
bs = append(bs, ';')
}
// 0- 7 : 40- 47
// 8-15 : 100-107
// > 15 : 48;5;val
switch fg := (c & maskBg) >> shiftBg; {
case fg <= 7:
// '4' and the value itself
bs = append(bs, '4', '0'+byte(fg))
case fg <= 15:
// '1', '0' and the value itself
bs = append(bs, '1', '0', '0'+byte(fg&^0x08)) // clear bright flag
default:
bs = append(bs, '4', '8', ';', '5', ';')
bs = append(bs, itoa(byte(fg))...)
}
return bs
}
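// exampleIndexedFgSketch is an illustrative sketch, not part of the upstream
// sources: foreground values above 15 fall through to the 38;5;n form shown
// in appendFg above.
func exampleIndexedFgSketch() string {
	c := (Color(200) << shiftFg) | flagFg // 8-bit palette index 200
	return c.Nos(false)                   // "38;5;200"
}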
func (c Color) appendFm9(bs []byte, zero bool) []byte {
bs = appendCond(bs, c&ItalicFm != 0,
zero || c&(BoldFm|FaintFm) != 0,
'3')
bs = appendCond(bs, c&UnderlineFm != 0,
zero || c&(BoldFm|FaintFm|ItalicFm) != 0,
'4')
// don't combine slow and rapid blink; use only
// one of them, preferring slow blink
if c&SlowBlinkFm != 0 {
bs = appendSemi(bs,
zero || c&(BoldFm|FaintFm|ItalicFm|UnderlineFm) != 0,
'5')
} else if c&RapidBlinkFm != 0 {
bs = appendSemi(bs,
zero || c&(BoldFm|FaintFm|ItalicFm|UnderlineFm) != 0,
'6')
}
// including 1-2
const mask6i = BoldFm | FaintFm |
ItalicFm | UnderlineFm |
SlowBlinkFm | RapidBlinkFm
bs = appendCond(bs, c&ReverseFm != 0,
zero || c&(mask6i) != 0,
'7')
bs = appendCond(bs, c&ConcealFm != 0,
zero || c&(mask6i|ReverseFm) != 0,
'8')
bs = appendCond(bs, c&CrossedOutFm != 0,
zero || c&(mask6i|ReverseFm|ConcealFm) != 0,
'9')
return bs
}
// appendNos appends a 1;3;38;5;216-like string that represents
// the ANSI color of the Color; if the zero argument is true,
// a '0' is appended first to reset the previous format
// and colors
func (c Color) appendNos(bs []byte, zero bool) []byte {
if zero {
bs = append(bs, '0') // reset previous
}
// formats
//
if c&maskFm != 0 {
// 1-2
// don't combine bold and faint; use only one of them, preferring bold
if c&BoldFm != 0 {
bs = appendSemi(bs, zero, '1')
} else if c&FaintFm != 0 {
bs = appendSemi(bs, zero, '2')
}
// 3-9
const mask9 = ItalicFm | UnderlineFm |
SlowBlinkFm | RapidBlinkFm |
ReverseFm | ConcealFm | CrossedOutFm
if c&mask9 != 0 {
bs = c.appendFm9(bs, zero)
}
// 20-21
const (
mask21 = FrakturFm | DoublyUnderlineFm
mask9i = BoldFm | FaintFm | mask9
)
if c&mask21 != 0 {
bs = appendCond(bs, c&FrakturFm != 0,
zero || c&mask9i != 0,
'2', '0')
bs = appendCond(bs, c&DoublyUnderlineFm != 0,
zero || c&(mask9i|FrakturFm) != 0,
'2', '1')
}
// 50-53
const (
mask53 = FramedFm | EncircledFm | OverlinedFm
mask21i = mask9i | mask21
)
if c&mask53 != 0 {
bs = appendCond(bs, c&FramedFm != 0,
zero || c&mask21i != 0,
'5', '1')
bs = appendCond(bs, c&EncircledFm != 0,
zero || c&(mask21i|FramedFm) != 0,
'5', '2')
bs = appendCond(bs, c&OverlinedFm != 0,
zero || c&(mask21i|FramedFm|EncircledFm) != 0,
'5', '3')
}
}
// foreground
if c&maskFg != 0 {
bs = c.appendFg(bs, zero)
}
// background
if c&maskBg != 0 {
bs = c.appendBg(bs, zero)
}
return bs
}
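// exampleCombinedFormatsSketch is an illustrative sketch, not part of the
// upstream sources: appendNos emits format codes in ascending SGR order,
// separated by semicolons.
func exampleCombinedFormatsSketch() string {
	c := BoldFm | UnderlineFm | ReverseFm
	return c.Nos(false) // "1;4;7"
}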

BIN
vendor/github.com/logrusorgru/aurora/disable.png generated vendored Normal file

BIN
vendor/github.com/logrusorgru/aurora/enable.png generated vendored Normal file

BIN
vendor/github.com/logrusorgru/aurora/gopher_aurora.png generated vendored Normal file

BIN
vendor/github.com/logrusorgru/aurora/printf.png generated vendored Normal file

BIN
vendor/github.com/logrusorgru/aurora/simple.png generated vendored Normal file

68
vendor/github.com/logrusorgru/aurora/sprintf.go generated vendored Normal file
View File

@@ -0,0 +1,68 @@
//
// Copyright (c) 2016-2020 The Aurora Authors. All rights reserved.
// This program is free software. It comes without any warranty,
// to the extent permitted by applicable law. You can redistribute
// it and/or modify it under the terms of the Unlicense. See LICENSE
// file for more details or see below.
//
//
// This is free and unencumbered software released into the public domain.
//
// Anyone is free to copy, modify, publish, use, compile, sell, or
// distribute this software, either in source code form or as a compiled
// binary, for any purpose, commercial or non-commercial, and by any
// means.
//
// In jurisdictions that recognize copyright laws, the author or authors
// of this software dedicate any and all copyright interest in the
// software to the public domain. We make this dedication for the benefit
// of the public at large and to the detriment of our heirs and
// successors. We intend this dedication to be an overt act of
// relinquishment in perpetuity of all present and future rights to this
// software under copyright law.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
// IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
// OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
// OTHER DEALINGS IN THE SOFTWARE.
//
// For more information, please refer to <http://unlicense.org/>
//
package aurora
import (
"fmt"
)
// Sprintf allows using a Value as the format. For example
//
// v := Sprintf(Red("total: %+3.5f points"), Blue(3.14))
//
// In this case "total:" and "points" will be red, but
// 3.14 will be blue. But, in another example
//
// v := Sprintf(Red("total: %+3.5f points"), 3.14)
//
// the full string will be red, and there is no way to reset
// 3.14 to the default format and color.
func Sprintf(format interface{}, args ...interface{}) string {
switch ft := format.(type) {
case string:
return fmt.Sprintf(ft, args...)
case Value:
for i, v := range args {
if val, ok := v.(Value); ok {
args[i] = val.setTail(ft.Color())
continue
}
}
return fmt.Sprintf(ft.String(), args...)
}
// unknown type of format (we hope it's a string)
return fmt.Sprintf(fmt.Sprint(format), args...)
}
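// exampleSprintfTailSketch is an illustrative sketch, not part of the
// upstream sources: when the format is a Value, every Value argument gets
// the format's color as a tail, so here the red resumes right after the
// blue 42 instead of falling back to the terminal default.
func exampleSprintfTailSketch() string {
	return Sprintf(Red("total: %d points"), Blue(42))
}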

BIN
vendor/github.com/logrusorgru/aurora/sprintf.png generated vendored Normal file

745
vendor/github.com/logrusorgru/aurora/value.go generated vendored Normal file
View File

@@ -0,0 +1,745 @@
//
// Copyright (c) 2016-2020 The Aurora Authors. All rights reserved.
// This program is free software. It comes without any warranty,
// to the extent permitted by applicable law. You can redistribute
// it and/or modify it under the terms of the Unlicense. See LICENSE
// file for more details or see below.
//
//
// This is free and unencumbered software released into the public domain.
//
// Anyone is free to copy, modify, publish, use, compile, sell, or
// distribute this software, either in source code form or as a compiled
// binary, for any purpose, commercial or non-commercial, and by any
// means.
//
// In jurisdictions that recognize copyright laws, the author or authors
// of this software dedicate any and all copyright interest in the
// software to the public domain. We make this dedication for the benefit
// of the public at large and to the detriment of our heirs and
// successors. We intend this dedication to be an overt act of
// relinquishment in perpetuity of all present and future rights to this
// software under copyright law.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
// IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
// OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
// OTHER DEALINGS IN THE SOFTWARE.
//
// For more information, please refer to <http://unlicense.org/>
//
package aurora
import (
"fmt"
"strconv"
"unicode/utf8"
)
// A Value represents any printable value
// with its color
type Value interface {
// String returns a string with colors. If there is any color
// or format, the string will be terminated with \033[0m
fmt.Stringer
// Format implements fmt.Formatter interface
fmt.Formatter
// Color returns value's color
Color() Color
// Value returns value's value (welcome to the tautology club)
Value() interface{}
// internals
tail() Color
setTail(Color) Value
// Bleach returns copy of original value without colors
//
// Deprecated: use Reset instead.
Bleach() Value
// Reset colors and formats
Reset() Value
//
// Formats
//
//
// Bold or increased intensity (1).
Bold() Value
// Faint, decreased intensity, reset the Bold (2).
Faint() Value
//
// DoublyUnderline or Bold off, double-underline
// per ECMA-48 (21). It depends.
DoublyUnderline() Value
// Fraktur, rarely supported (20).
Fraktur() Value
//
// Italic, not widely supported, sometimes
// treated as inverse (3).
Italic() Value
// Underline (4).
Underline() Value
//
// SlowBlink, blinking less than 150
// per minute (5).
SlowBlink() Value
// RapidBlink, blinking 150+ per minute,
// not widely supported (6).
RapidBlink() Value
// Blink is alias for the SlowBlink.
Blink() Value
//
// Reverse video, swap foreground and
// background colors (7).
Reverse() Value
// Inverse is alias for the Reverse
Inverse() Value
//
// Conceal, hidden, not widely supported (8).
Conceal() Value
// Hidden is alias for the Conceal
Hidden() Value
//
// CrossedOut, characters legible, but
// marked for deletion (9).
CrossedOut() Value
// StrikeThrough is alias for the CrossedOut.
StrikeThrough() Value
//
// Framed (51).
Framed() Value
// Encircled (52).
Encircled() Value
//
// Overlined (53).
Overlined() Value
//
// Foreground colors
//
//
// Black foreground color (30)
Black() Value
// Red foreground color (31)
Red() Value
// Green foreground color (32)
Green() Value
// Yellow foreground color (33)
Yellow() Value
// Brown foreground color (33)
//
// Deprecated: use Yellow instead, following specification
Brown() Value
// Blue foreground color (34)
Blue() Value
// Magenta foreground color (35)
Magenta() Value
// Cyan foreground color (36)
Cyan() Value
// White foreground color (37)
White() Value
//
// Bright foreground colors
//
// BrightBlack foreground color (90)
BrightBlack() Value
// BrightRed foreground color (91)
BrightRed() Value
// BrightGreen foreground color (92)
BrightGreen() Value
// BrightYellow foreground color (93)
BrightYellow() Value
// BrightBlue foreground color (94)
BrightBlue() Value
// BrightMagenta foreground color (95)
BrightMagenta() Value
// BrightCyan foreground color (96)
BrightCyan() Value
// BrightWhite foreground color (97)
BrightWhite() Value
//
// Other
//
// Index of pre-defined 8-bit foreground color
// from 0 to 255 (38;5;n).
//
// 0- 7: standard colors (as in ESC [ 30-37 m)
// 8- 15: high intensity colors (as in ESC [ 90-97 m)
// 16-231: 6 × 6 × 6 cube (216 colors): 16 + 36 × r + 6 × g + b (0 ≤ r, g, b ≤ 5)
// 232-255: grayscale from black to white in 24 steps
//
Index(n uint8) Value
// Gray from 0 to 23.
Gray(n uint8) Value
//
// Background colors
//
//
// BgBlack background color (40)
BgBlack() Value
// BgRed background color (41)
BgRed() Value
// BgGreen background color (42)
BgGreen() Value
// BgYellow background color (43)
BgYellow() Value
// BgBrown background color (43)
//
// Deprecated: use BgYellow instead, following specification
BgBrown() Value
// BgBlue background color (44)
BgBlue() Value
// BgMagenta background color (45)
BgMagenta() Value
// BgCyan background color (46)
BgCyan() Value
// BgWhite background color (47)
BgWhite() Value
//
// Bright background colors
//
// BgBrightBlack background color (100)
BgBrightBlack() Value
// BgBrightRed background color (101)
BgBrightRed() Value
// BgBrightGreen background color (102)
BgBrightGreen() Value
// BgBrightYellow background color (103)
BgBrightYellow() Value
// BgBrightBlue background color (104)
BgBrightBlue() Value
// BgBrightMagenta background color (105)
BgBrightMagenta() Value
// BgBrightCyan background color (106)
BgBrightCyan() Value
// BgBrightWhite background color (107)
BgBrightWhite() Value
//
// Other
//
// BgIndex of 8-bit pre-defined background color
// from 0 to 255 (48;5;n).
//
// 0- 7: standard colors (as in ESC [ 40-47 m)
// 8- 15: high intensity colors (as in ESC [100-107 m)
// 16-231: 6 × 6 × 6 cube (216 colors): 16 + 36 × r + 6 × g + b (0 ≤ r, g, b ≤ 5)
// 232-255: grayscale from black to white in 24 steps
//
BgIndex(n uint8) Value
// BgGray from 0 to 23.
BgGray(n uint8) Value
//
// Special
//
// Colorize removes existing colors and
// formats of the argument and applies given.
Colorize(color Color) Value
}
// Value without colors
type valueClear struct {
value interface{}
}
func (vc valueClear) String() string { return fmt.Sprint(vc.value) }
func (vc valueClear) Color() Color { return 0 }
func (vc valueClear) Value() interface{} { return vc.value }
func (vc valueClear) tail() Color { return 0 }
func (vc valueClear) setTail(Color) Value { return vc }
func (vc valueClear) Bleach() Value { return vc }
func (vc valueClear) Reset() Value { return vc }
func (vc valueClear) Bold() Value { return vc }
func (vc valueClear) Faint() Value { return vc }
func (vc valueClear) DoublyUnderline() Value { return vc }
func (vc valueClear) Fraktur() Value { return vc }
func (vc valueClear) Italic() Value { return vc }
func (vc valueClear) Underline() Value { return vc }
func (vc valueClear) SlowBlink() Value { return vc }
func (vc valueClear) RapidBlink() Value { return vc }
func (vc valueClear) Blink() Value { return vc }
func (vc valueClear) Reverse() Value { return vc }
func (vc valueClear) Inverse() Value { return vc }
func (vc valueClear) Conceal() Value { return vc }
func (vc valueClear) Hidden() Value { return vc }
func (vc valueClear) CrossedOut() Value { return vc }
func (vc valueClear) StrikeThrough() Value { return vc }
func (vc valueClear) Framed() Value { return vc }
func (vc valueClear) Encircled() Value { return vc }
func (vc valueClear) Overlined() Value { return vc }
func (vc valueClear) Black() Value { return vc }
func (vc valueClear) Red() Value { return vc }
func (vc valueClear) Green() Value { return vc }
func (vc valueClear) Yellow() Value { return vc }
func (vc valueClear) Brown() Value { return vc }
func (vc valueClear) Blue() Value { return vc }
func (vc valueClear) Magenta() Value { return vc }
func (vc valueClear) Cyan() Value { return vc }
func (vc valueClear) White() Value { return vc }
func (vc valueClear) BrightBlack() Value { return vc }
func (vc valueClear) BrightRed() Value { return vc }
func (vc valueClear) BrightGreen() Value { return vc }
func (vc valueClear) BrightYellow() Value { return vc }
func (vc valueClear) BrightBlue() Value { return vc }
func (vc valueClear) BrightMagenta() Value { return vc }
func (vc valueClear) BrightCyan() Value { return vc }
func (vc valueClear) BrightWhite() Value { return vc }
func (vc valueClear) Index(uint8) Value { return vc }
func (vc valueClear) Gray(uint8) Value { return vc }
func (vc valueClear) BgBlack() Value { return vc }
func (vc valueClear) BgRed() Value { return vc }
func (vc valueClear) BgGreen() Value { return vc }
func (vc valueClear) BgYellow() Value { return vc }
func (vc valueClear) BgBrown() Value { return vc }
func (vc valueClear) BgBlue() Value { return vc }
func (vc valueClear) BgMagenta() Value { return vc }
func (vc valueClear) BgCyan() Value { return vc }
func (vc valueClear) BgWhite() Value { return vc }
func (vc valueClear) BgBrightBlack() Value { return vc }
func (vc valueClear) BgBrightRed() Value { return vc }
func (vc valueClear) BgBrightGreen() Value { return vc }
func (vc valueClear) BgBrightYellow() Value { return vc }
func (vc valueClear) BgBrightBlue() Value { return vc }
func (vc valueClear) BgBrightMagenta() Value { return vc }
func (vc valueClear) BgBrightCyan() Value { return vc }
func (vc valueClear) BgBrightWhite() Value { return vc }
func (vc valueClear) BgIndex(uint8) Value { return vc }
func (vc valueClear) BgGray(uint8) Value { return vc }
func (vc valueClear) Colorize(Color) Value { return vc }
func (vc valueClear) Format(s fmt.State, verb rune) {
// it's enough for many cases (%-+020.10f)
// % - 1
// availFlags - 3 (5)
// width - 2
// prec - 3 (.23)
// verb - 1
// --------------
// 10
format := make([]byte, 1, 10)
format[0] = '%'
var f byte
for i := 0; i < len(availFlags); i++ {
if f = availFlags[i]; s.Flag(int(f)) {
format = append(format, f)
}
}
var width, prec int
var ok bool
if width, ok = s.Width(); ok {
format = strconv.AppendInt(format, int64(width), 10)
}
if prec, ok = s.Precision(); ok {
format = append(format, '.')
format = strconv.AppendInt(format, int64(prec), 10)
}
if verb > utf8.RuneSelf {
format = append(format, string(verb)...)
} else {
format = append(format, byte(verb))
}
fmt.Fprintf(s, string(format), vc.value)
}
// Value within colors
type value struct {
value interface{} // value as it
color Color // this color
tailColor Color // tail color
}
func (v value) String() string {
if v.color != 0 {
if v.tailColor != 0 {
return esc + v.color.Nos(true) + "m" +
fmt.Sprint(v.value) +
esc + v.tailColor.Nos(true) + "m"
}
return esc + v.color.Nos(false) + "m" + fmt.Sprint(v.value) + clear
}
return fmt.Sprint(v.value)
}
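// exampleValueStringSketch is an illustrative sketch, not part of the
// upstream sources: a single foreground color with no tail renders as one
// escape sequence around the value, followed by the clear sequence.
func exampleValueStringSketch() string {
	return Red("x").String() // "\033[31mx\033[0m"
}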
func (v value) Color() Color { return v.color }
func (v value) Bleach() Value {
v.color, v.tailColor = 0, 0
return v
}
func (v value) Reset() Value {
v.color, v.tailColor = 0, 0
return v
}
func (v value) tail() Color { return v.tailColor }
func (v value) setTail(t Color) Value {
v.tailColor = t
return v
}
func (v value) Value() interface{} { return v.value }
func (v value) Format(s fmt.State, verb rune) {
// it's enough for many cases (%-+020.10f)
// % - 1
// availFlags - 3 (5)
// width - 2
// prec - 3 (.23)
// verb - 1
// --------------
// 10
// +
// \033[ 5
// 0;1;3;4;5;7;8;9;20;21;51;52;53 30
// 38;5;216 8
// 48;5;216 8
// m 1
// +
// \033[0m 7
//
// x2 (possible tail color)
//
// 10 + 59 * 2 = 128
format := make([]byte, 0, 128)
if v.color != 0 {
format = append(format, esc...)
format = v.color.appendNos(format, v.tailColor != 0)
format = append(format, 'm')
}
format = append(format, '%')
var f byte
for i := 0; i < len(availFlags); i++ {
if f = availFlags[i]; s.Flag(int(f)) {
format = append(format, f)
}
}
var width, prec int
var ok bool
if width, ok = s.Width(); ok {
format = strconv.AppendInt(format, int64(width), 10)
}
if prec, ok = s.Precision(); ok {
format = append(format, '.')
format = strconv.AppendInt(format, int64(prec), 10)
}
if verb > utf8.RuneSelf {
format = append(format, string(verb)...)
} else {
format = append(format, byte(verb))
}
if v.color != 0 {
if v.tailColor != 0 {
// set next (previous) format clearing current one
format = append(format, esc...)
format = v.tailColor.appendNos(format, true)
format = append(format, 'm')
} else {
format = append(format, clear...) // just clear
}
}
fmt.Fprintf(s, string(format), v.value)
}
func (v value) Bold() Value {
v.color = (v.color &^ FaintFm) | BoldFm
return v
}
func (v value) Faint() Value {
v.color = (v.color &^ BoldFm) | FaintFm
return v
}
func (v value) DoublyUnderline() Value {
v.color |= DoublyUnderlineFm
return v
}
func (v value) Fraktur() Value {
v.color |= FrakturFm
return v
}
func (v value) Italic() Value {
v.color |= ItalicFm
return v
}
func (v value) Underline() Value {
v.color |= UnderlineFm
return v
}
func (v value) SlowBlink() Value {
v.color = (v.color &^ RapidBlinkFm) | SlowBlinkFm
return v
}
func (v value) RapidBlink() Value {
v.color = (v.color &^ SlowBlinkFm) | RapidBlinkFm
return v
}
func (v value) Blink() Value {
return v.SlowBlink()
}
func (v value) Reverse() Value {
v.color |= ReverseFm
return v
}
func (v value) Inverse() Value {
return v.Reverse()
}
func (v value) Conceal() Value {
v.color |= ConcealFm
return v
}
func (v value) Hidden() Value {
return v.Conceal()
}
func (v value) CrossedOut() Value {
v.color |= CrossedOutFm
return v
}
func (v value) StrikeThrough() Value {
return v.CrossedOut()
}
func (v value) Framed() Value {
v.color |= FramedFm
return v
}
func (v value) Encircled() Value {
v.color |= EncircledFm
return v
}
func (v value) Overlined() Value {
v.color |= OverlinedFm
return v
}
func (v value) Black() Value {
v.color = (v.color &^ maskFg) | BlackFg
return v
}
func (v value) Red() Value {
v.color = (v.color &^ maskFg) | RedFg
return v
}
func (v value) Green() Value {
v.color = (v.color &^ maskFg) | GreenFg
return v
}
func (v value) Yellow() Value {
v.color = (v.color &^ maskFg) | YellowFg
return v
}
func (v value) Brown() Value {
return v.Yellow()
}
func (v value) Blue() Value {
v.color = (v.color &^ maskFg) | BlueFg
return v
}
func (v value) Magenta() Value {
v.color = (v.color &^ maskFg) | MagentaFg
return v
}
func (v value) Cyan() Value {
v.color = (v.color &^ maskFg) | CyanFg
return v
}
func (v value) White() Value {
v.color = (v.color &^ maskFg) | WhiteFg
return v
}
func (v value) BrightBlack() Value {
v.color = (v.color &^ maskFg) | BrightFg | BlackFg
return v
}
func (v value) BrightRed() Value {
v.color = (v.color &^ maskFg) | BrightFg | RedFg
return v
}
func (v value) BrightGreen() Value {
v.color = (v.color &^ maskFg) | BrightFg | GreenFg
return v
}
func (v value) BrightYellow() Value {
v.color = (v.color &^ maskFg) | BrightFg | YellowFg
return v
}
func (v value) BrightBlue() Value {
v.color = (v.color &^ maskFg) | BrightFg | BlueFg
return v
}
func (v value) BrightMagenta() Value {
v.color = (v.color &^ maskFg) | BrightFg | MagentaFg
return v
}
func (v value) BrightCyan() Value {
v.color = (v.color &^ maskFg) | BrightFg | CyanFg
return v
}
func (v value) BrightWhite() Value {
v.color = (v.color &^ maskFg) | BrightFg | WhiteFg
return v
}
func (v value) Index(n uint8) Value {
v.color = (v.color &^ maskFg) | (Color(n) << shiftFg) | flagFg
return v
}
func (v value) Gray(n uint8) Value {
if n > 23 {
n = 23
}
v.color = (v.color &^ maskFg) | (Color(232+n) << shiftFg) | flagFg
return v
}
func (v value) BgBlack() Value {
v.color = (v.color &^ maskBg) | BlackBg
return v
}
func (v value) BgRed() Value {
v.color = (v.color &^ maskBg) | RedBg
return v
}
func (v value) BgGreen() Value {
v.color = (v.color &^ maskBg) | GreenBg
return v
}
func (v value) BgYellow() Value {
v.color = (v.color &^ maskBg) | YellowBg
return v
}
func (v value) BgBrown() Value {
return v.BgYellow()
}
func (v value) BgBlue() Value {
v.color = (v.color &^ maskBg) | BlueBg
return v
}
func (v value) BgMagenta() Value {
v.color = (v.color &^ maskBg) | MagentaBg
return v
}
func (v value) BgCyan() Value {
v.color = (v.color &^ maskBg) | CyanBg
return v
}
func (v value) BgWhite() Value {
v.color = (v.color &^ maskBg) | WhiteBg
return v
}
func (v value) BgBrightBlack() Value {
v.color = (v.color &^ maskBg) | BrightBg | BlackBg
return v
}
func (v value) BgBrightRed() Value {
v.color = (v.color &^ maskBg) | BrightBg | RedBg
return v
}
func (v value) BgBrightGreen() Value {
v.color = (v.color &^ maskBg) | BrightBg | GreenBg
return v
}
func (v value) BgBrightYellow() Value {
v.color = (v.color &^ maskBg) | BrightBg | YellowBg
return v
}
func (v value) BgBrightBlue() Value {
v.color = (v.color &^ maskBg) | BrightBg | BlueBg
return v
}
func (v value) BgBrightMagenta() Value {
v.color = (v.color &^ maskBg) | BrightBg | MagentaBg
return v
}
func (v value) BgBrightCyan() Value {
v.color = (v.color &^ maskBg) | BrightBg | CyanBg
return v
}
func (v value) BgBrightWhite() Value {
v.color = (v.color &^ maskBg) | BrightBg | WhiteBg
return v
}
func (v value) BgIndex(n uint8) Value {
v.color = (v.color &^ maskBg) | (Color(n) << shiftBg) | flagBg
return v
}
func (v value) BgGray(n uint8) Value {
if n > 23 {
n = 23
}
v.color = (v.color &^ maskBg) | (Color(232+n) << shiftBg) | flagBg
return v
}
func (v value) Colorize(color Color) Value {
v.color = color
return v
}

558
vendor/github.com/logrusorgru/aurora/wrap.go generated vendored Normal file
View File

@@ -0,0 +1,558 @@
//
// Copyright (c) 2016-2020 The Aurora Authors. All rights reserved.
// This program is free software. It comes without any warranty,
// to the extent permitted by applicable law. You can redistribute
// it and/or modify it under the terms of the Unlicense. See LICENSE
// file for more details or see below.
//
//
// This is free and unencumbered software released into the public domain.
//
// Anyone is free to copy, modify, publish, use, compile, sell, or
// distribute this software, either in source code form or as a compiled
// binary, for any purpose, commercial or non-commercial, and by any
// means.
//
// In jurisdictions that recognize copyright laws, the author or authors
// of this software dedicate any and all copyright interest in the
// software to the public domain. We make this dedication for the benefit
// of the public at large and to the detriment of our heirs and
// successors. We intend this dedication to be an overt act of
// relinquishment in perpetuity of all present and future rights to this
// software under copyright law.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
// IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
// OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
// ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
// OTHER DEALINGS IN THE SOFTWARE.
//
// For more information, please refer to <http://unlicense.org/>
//
package aurora
// Colorize wraps the given value into a Value with the
// given colors. For example
//
// s := Colorize("some", BlueFg|GreenBg|BoldFm)
//
// returns a Value with blue foreground, green
// background and bold. Unlike functions like
// Red/BgBlue/Bold etc., this function clears
// all previous colors and formats. Thus
//
// s := Colorize(Red("some"), BlueBg)
//
// clears the red color from the value.
func Colorize(arg interface{}, color Color) Value {
if val, ok := arg.(value); ok {
val.color = color
return val
}
return value{arg, color, 0}
}
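// exampleColorizeSketch is an illustrative sketch, not part of the upstream
// sources: Colorize replaces whatever colors and formats the argument
// already carries, so the red below is dropped and only the blue background
// remains.
func exampleColorizeSketch() Value {
	return Colorize(Red("some"), BlueBg)
}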
// Reset wraps given argument returning Value
// without formats and colors.
func Reset(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Reset()
}
return value{value: arg}
}
//
// Formats
//
// Bold or increased intensity (1).
func Bold(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Bold()
}
return value{value: arg, color: BoldFm}
}
// Faint decreases intensity (2).
// Faint resets the Bold.
func Faint(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Faint()
}
return value{value: arg, color: FaintFm}
}
// DoublyUnderline or Bold off, double-underline
// per ECMA-48 (21).
func DoublyUnderline(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.DoublyUnderline()
}
return value{value: arg, color: DoublyUnderlineFm}
}
// Fraktur is rarely supported (20).
func Fraktur(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Fraktur()
}
return value{value: arg, color: FrakturFm}
}
// Italic is not widely supported, sometimes
// treated as inverse (3).
func Italic(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Italic()
}
return value{value: arg, color: ItalicFm}
}
// Underline (4).
func Underline(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Underline()
}
return value{value: arg, color: UnderlineFm}
}
// SlowBlink makes text blink less than
// 150 per minute (5).
func SlowBlink(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.SlowBlink()
}
return value{value: arg, color: SlowBlinkFm}
}
// RapidBlink makes text blink 150+ per
// minute. It is not widely supported (6).
func RapidBlink(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.RapidBlink()
}
return value{value: arg, color: RapidBlinkFm}
}
// Blink is alias for the SlowBlink.
func Blink(arg interface{}) Value {
return SlowBlink(arg)
}
// Reverse video, swap foreground and
// background colors (7).
func Reverse(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Reverse()
}
return value{value: arg, color: ReverseFm}
}
// Inverse is alias for the Reverse
func Inverse(arg interface{}) Value {
return Reverse(arg)
}
// Conceal hides text, preserving an ability to select
// the text and copy it. It is not widely supported (8).
func Conceal(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Conceal()
}
return value{value: arg, color: ConcealFm}
}
// Hidden is alias for the Conceal
func Hidden(arg interface{}) Value {
return Conceal(arg)
}
// CrossedOut makes characters legible, but
// marked for deletion (9).
func CrossedOut(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.CrossedOut()
}
return value{value: arg, color: CrossedOutFm}
}
// StrikeThrough is alias for the CrossedOut.
func StrikeThrough(arg interface{}) Value {
return CrossedOut(arg)
}
// Framed (51).
func Framed(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Framed()
}
return value{value: arg, color: FramedFm}
}
// Encircled (52).
func Encircled(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Encircled()
}
return value{value: arg, color: EncircledFm}
}
// Overlined (53).
func Overlined(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Overlined()
}
return value{value: arg, color: OverlinedFm}
}
//
// Foreground colors
//
//
// Black foreground color (30)
func Black(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Black()
}
return value{value: arg, color: BlackFg}
}
// Red foreground color (31)
func Red(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Red()
}
return value{value: arg, color: RedFg}
}
// Green foreground color (32)
func Green(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Green()
}
return value{value: arg, color: GreenFg}
}
// Yellow foreground color (33)
func Yellow(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Yellow()
}
return value{value: arg, color: YellowFg}
}
// Brown foreground color (33)
//
// Deprecated: use Yellow instead, following specification
func Brown(arg interface{}) Value {
return Yellow(arg)
}
// Blue foreground color (34)
func Blue(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Blue()
}
return value{value: arg, color: BlueFg}
}
// Magenta foreground color (35)
func Magenta(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Magenta()
}
return value{value: arg, color: MagentaFg}
}
// Cyan foreground color (36)
func Cyan(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Cyan()
}
return value{value: arg, color: CyanFg}
}
// White foreground color (37)
func White(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.White()
}
return value{value: arg, color: WhiteFg}
}
//
// Bright foreground colors
//
// BrightBlack foreground color (90)
func BrightBlack(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightBlack()
}
return value{value: arg, color: BrightFg | BlackFg}
}
// BrightRed foreground color (91)
func BrightRed(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightRed()
}
return value{value: arg, color: BrightFg | RedFg}
}
// BrightGreen foreground color (92)
func BrightGreen(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightGreen()
}
return value{value: arg, color: BrightFg | GreenFg}
}
// BrightYellow foreground color (93)
func BrightYellow(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightYellow()
}
return value{value: arg, color: BrightFg | YellowFg}
}
// BrightBlue foreground color (94)
func BrightBlue(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightBlue()
}
return value{value: arg, color: BrightFg | BlueFg}
}
// BrightMagenta foreground color (95)
func BrightMagenta(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightMagenta()
}
return value{value: arg, color: BrightFg | MagentaFg}
}
// BrightCyan foreground color (96)
func BrightCyan(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightCyan()
}
return value{value: arg, color: BrightFg | CyanFg}
}
// BrightWhite foreground color (97)
func BrightWhite(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BrightWhite()
}
return value{value: arg, color: BrightFg | WhiteFg}
}
//
// Other
//
// Index of pre-defined 8-bit foreground color
// from 0 to 255 (38;5;n).
//
// 0- 7: standard colors (as in ESC [ 30-37 m)
// 8- 15: high intensity colors (as in ESC [ 90-97 m)
// 16-231: 6 × 6 × 6 cube (216 colors): 16 + 36 × r + 6 × g + b (0 ≤ r, g, b ≤ 5)
// 232-255: grayscale from black to white in 24 steps
//
func Index(n uint8, arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Index(n)
}
return value{value: arg, color: (Color(n) << shiftFg) | flagFg}
}
// Gray from 0 to 23.
func Gray(n uint8, arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.Gray(n)
}
if n > 23 {
n = 23
}
return value{value: arg, color: (Color(232+n) << shiftFg) | flagFg}
}
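// exampleIndexAndGraySketch is an illustrative sketch, not part of the
// upstream sources: it applies the 6x6x6 cube formula from the Index
// comment (16 + 36*r + 6*g + b) and the 232+n mapping used by Gray.
func exampleIndexAndGraySketch() (Value, Value) {
	orange := Index(16+36*5+6*2+0, "warm") // r=5, g=2, b=0 -> palette index 208
	dim := Gray(6, "quiet")                // -> palette index 232+6 = 238
	return orange, dim
}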
//
// Background colors
//
//
// BgBlack background color (40)
func BgBlack(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBlack()
}
return value{value: arg, color: BlackBg}
}
// BgRed background color (41)
func BgRed(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgRed()
}
return value{value: arg, color: RedBg}
}
// BgGreen background color (42)
func BgGreen(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgGreen()
}
return value{value: arg, color: GreenBg}
}
// BgYellow background color (43)
func BgYellow(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgYellow()
}
return value{value: arg, color: YellowBg}
}
// BgBrown background color (43)
//
// Deprecated: use BgYellow instead, following specification
func BgBrown(arg interface{}) Value {
return BgYellow(arg)
}
// BgBlue background color (44)
func BgBlue(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBlue()
}
return value{value: arg, color: BlueBg}
}
// BgMagenta background color (45)
func BgMagenta(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgMagenta()
}
return value{value: arg, color: MagentaBg}
}
// BgCyan background color (46)
func BgCyan(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgCyan()
}
return value{value: arg, color: CyanBg}
}
// BgWhite background color (47)
func BgWhite(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgWhite()
}
return value{value: arg, color: WhiteBg}
}
//
// Bright background colors
//
// BgBrightBlack background color (100)
func BgBrightBlack(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightBlack()
}
return value{value: arg, color: BrightBg | BlackBg}
}
// BgBrightRed background color (101)
func BgBrightRed(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightRed()
}
return value{value: arg, color: BrightBg | RedBg}
}
// BgBrightGreen background color (102)
func BgBrightGreen(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightGreen()
}
return value{value: arg, color: BrightBg | GreenBg}
}
// BgBrightYellow background color (103)
func BgBrightYellow(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightYellow()
}
return value{value: arg, color: BrightBg | YellowBg}
}
// BgBrightBlue background color (104)
func BgBrightBlue(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightBlue()
}
return value{value: arg, color: BrightBg | BlueBg}
}
// BgBrightMagenta background color (105)
func BgBrightMagenta(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightMagenta()
}
return value{value: arg, color: BrightBg | MagentaBg}
}
// BgBrightCyan background color (106)
func BgBrightCyan(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightCyan()
}
return value{value: arg, color: BrightBg | CyanBg}
}
// BgBrightWhite background color (107)
func BgBrightWhite(arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgBrightWhite()
}
return value{value: arg, color: BrightBg | WhiteBg}
}
//
// Other
//
// BgIndex of 8-bit pre-defined background color
// from 0 to 255 (48;5;n).
//
// 0- 7: standard colors (as in ESC [ 40-47 m)
// 8- 15: high intensity colors (as in ESC [100-107 m)
// 16-231: 6 × 6 × 6 cube (216 colors): 16 + 36 × r + 6 × g + b (0 ≤ r, g, b ≤ 5)
// 232-255: grayscale from black to white in 24 steps
//
func BgIndex(n uint8, arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgIndex(n)
}
return value{value: arg, color: (Color(n) << shiftBg) | flagBg}
}
// BgGray from 0 to 23.
func BgGray(n uint8, arg interface{}) Value {
if val, ok := arg.(Value); ok {
return val.BgGray(n)
}
if n > 23 {
n = 23
}
return value{value: arg, color: (Color(n+232) << shiftBg) | flagBg}
}

27
vendor/github.com/pmezard/go-difflib/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,27 @@
Copyright (c) 2013, Patrick Mezard
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
The names of its contributors may not be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

772
vendor/github.com/pmezard/go-difflib/difflib/difflib.go generated vendored Normal file
View File

@@ -0,0 +1,772 @@
// Package difflib is a partial port of Python difflib module.
//
// It provides tools to compare sequences of strings and generate textual diffs.
//
// The following class and functions have been ported:
//
// - SequenceMatcher
//
// - unified_diff
//
// - context_diff
//
// Getting unified diffs was the main goal of the port. Keep in mind this code
// is mostly suitable for outputting text differences in a human-friendly way;
// there are no guarantees that generated diffs are consumable by patch(1).
package difflib
import (
"bufio"
"bytes"
"fmt"
"io"
"strings"
)
func min(a, b int) int {
if a < b {
return a
}
return b
}
func max(a, b int) int {
if a > b {
return a
}
return b
}
func calculateRatio(matches, length int) float64 {
if length > 0 {
return 2.0 * float64(matches) / float64(length)
}
return 1.0
}
type Match struct {
A int
B int
Size int
}
type OpCode struct {
Tag byte
I1 int
I2 int
J1 int
J2 int
}
// SequenceMatcher compares sequence of strings. The basic
// algorithm predates, and is a little fancier than, an algorithm
// published in the late 1980's by Ratcliff and Obershelp under the
// hyperbolic name "gestalt pattern matching". The basic idea is to find
// the longest contiguous matching subsequence that contains no "junk"
// elements (R-O doesn't address junk). The same idea is then applied
// recursively to the pieces of the sequences to the left and to the right
// of the matching subsequence. This does not yield minimal edit
// sequences, but does tend to yield matches that "look right" to people.
//
// SequenceMatcher tries to compute a "human-friendly diff" between two
// sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the
// longest *contiguous* & junk-free matching subsequence. That's what
// catches peoples' eyes. The Windows(tm) windiff has another interesting
// notion, pairing up elements that appear uniquely in each sequence.
// That, and the method here, appear to yield more intuitive difference
// reports than does diff. This method appears to be the least vulnerable
// to synching up on blocks of "junk lines", though (like blank lines in
// ordinary text files, or maybe "<P>" lines in HTML files). That may be
// because this is the only method of the 3 that has a *concept* of
// "junk" <wink>.
//
// Timing: Basic R-O is cubic time worst case and quadratic time expected
// case. SequenceMatcher is quadratic time for the worst case and has
// expected-case behavior dependent in a complicated way on how many
// elements the sequences have in common; best case time is linear.
type SequenceMatcher struct {
a []string
b []string
b2j map[string][]int
IsJunk func(string) bool
autoJunk bool
bJunk map[string]struct{}
matchingBlocks []Match
fullBCount map[string]int
bPopular map[string]struct{}
opCodes []OpCode
}
func NewMatcher(a, b []string) *SequenceMatcher {
m := SequenceMatcher{autoJunk: true}
m.SetSeqs(a, b)
return &m
}
func NewMatcherWithJunk(a, b []string, autoJunk bool,
isJunk func(string) bool) *SequenceMatcher {
m := SequenceMatcher{IsJunk: isJunk, autoJunk: autoJunk}
m.SetSeqs(a, b)
return &m
}
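// exampleSequenceMatcherSketch is an illustrative sketch, not part of the
// upstream sources: it compares two small sequences of lines and returns
// the opcodes describing how to turn a into b (here: keep "one", delete
// "two", keep "three", insert "four").
func exampleSequenceMatcherSketch() []OpCode {
	a := []string{"one", "two", "three"}
	b := []string{"one", "three", "four"}
	return NewMatcher(a, b).GetOpCodes()
}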
// Set two sequences to be compared.
func (m *SequenceMatcher) SetSeqs(a, b []string) {
m.SetSeq1(a)
m.SetSeq2(b)
}
// Set the first sequence to be compared. The second sequence to be compared is
// not changed.
//
// SequenceMatcher computes and caches detailed information about the second
// sequence, so if you want to compare one sequence S against many sequences,
// use .SetSeq2(s) once and call .SetSeq1(x) repeatedly for each of the other
// sequences.
//
// See also SetSeqs() and SetSeq2().
func (m *SequenceMatcher) SetSeq1(a []string) {
if &a == &m.a {
return
}
m.a = a
m.matchingBlocks = nil
m.opCodes = nil
}
// Set the second sequence to be compared. The first sequence to be compared is
// not changed.
func (m *SequenceMatcher) SetSeq2(b []string) {
if &b == &m.b {
return
}
m.b = b
m.matchingBlocks = nil
m.opCodes = nil
m.fullBCount = nil
m.chainB()
}
func (m *SequenceMatcher) chainB() {
// Populate line -> index mapping
b2j := map[string][]int{}
for i, s := range m.b {
indices := b2j[s]
indices = append(indices, i)
b2j[s] = indices
}
// Purge junk elements
m.bJunk = map[string]struct{}{}
if m.IsJunk != nil {
junk := m.bJunk
for s := range b2j {
if m.IsJunk(s) {
junk[s] = struct{}{}
}
}
for s := range junk {
delete(b2j, s)
}
}
// Purge remaining popular elements
popular := map[string]struct{}{}
n := len(m.b)
if m.autoJunk && n >= 200 {
ntest := n/100 + 1
for s, indices := range b2j {
if len(indices) > ntest {
popular[s] = struct{}{}
}
}
for s := range popular {
delete(b2j, s)
}
}
m.bPopular = popular
m.b2j = b2j
}
func (m *SequenceMatcher) isBJunk(s string) bool {
_, ok := m.bJunk[s]
return ok
}
// Find longest matching block in a[alo:ahi] and b[blo:bhi].
//
// If IsJunk is not defined:
//
// Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
// alo <= i <= i+k <= ahi
// blo <= j <= j+k <= bhi
// and for all (i',j',k') meeting those conditions,
// k >= k'
// i <= i'
// and if i == i', j <= j'
//
// In other words, of all maximal matching blocks, return one that
// starts earliest in a, and of all those maximal matching blocks that
// start earliest in a, return the one that starts earliest in b.
//
// If IsJunk is defined, first the longest matching block is
// determined as above, but with the additional restriction that no
// junk element appears in the block. Then that block is extended as
// far as possible by matching (only) junk elements on both sides. So
// the resulting block never matches on junk except as identical junk
// happens to be adjacent to an "interesting" match.
//
// If no blocks match, return (alo, blo, 0).
func (m *SequenceMatcher) findLongestMatch(alo, ahi, blo, bhi int) Match {
// CAUTION: stripping common prefix or suffix would be incorrect.
// E.g.,
// ab
// acab
// Longest matching block is "ab", but if common prefix is
// stripped, it's "a" (tied with "b"). UNIX(tm) diff does so
// strip, so ends up claiming that ab is changed to acab by
// inserting "ca" in the middle. That's minimal but unintuitive:
// "it's obvious" that someone inserted "ac" at the front.
// Windiff ends up at the same place as diff, but by pairing up
// the unique 'b's and then matching the first two 'a's.
besti, bestj, bestsize := alo, blo, 0
// find longest junk-free match
// during an iteration of the loop, j2len[j] = length of longest
// junk-free match ending with a[i-1] and b[j]
j2len := map[int]int{}
for i := alo; i != ahi; i++ {
// look at all instances of a[i] in b; note that because
// b2j has no junk keys, the loop is skipped if a[i] is junk
newj2len := map[int]int{}
for _, j := range m.b2j[m.a[i]] {
// a[i] matches b[j]
if j < blo {
continue
}
if j >= bhi {
break
}
k := j2len[j-1] + 1
newj2len[j] = k
if k > bestsize {
besti, bestj, bestsize = i-k+1, j-k+1, k
}
}
j2len = newj2len
}
// Extend the best by non-junk elements on each end. In particular,
// "popular" non-junk elements aren't in b2j, which greatly speeds
// the inner loop above, but also means "the best" match so far
// doesn't contain any junk *or* popular non-junk elements.
for besti > alo && bestj > blo && !m.isBJunk(m.b[bestj-1]) &&
m.a[besti-1] == m.b[bestj-1] {
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
}
for besti+bestsize < ahi && bestj+bestsize < bhi &&
!m.isBJunk(m.b[bestj+bestsize]) &&
m.a[besti+bestsize] == m.b[bestj+bestsize] {
bestsize += 1
}
// Now that we have a wholly interesting match (albeit possibly
// empty!), we may as well suck up the matching junk on each
// side of it too. Can't think of a good reason not to, and it
// saves post-processing the (possibly considerable) expense of
// figuring out what to do with it. In the case of an empty
// interesting match, this is clearly the right thing to do,
// because no other kind of match is possible in the regions.
for besti > alo && bestj > blo && m.isBJunk(m.b[bestj-1]) &&
m.a[besti-1] == m.b[bestj-1] {
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
}
for besti+bestsize < ahi && bestj+bestsize < bhi &&
m.isBJunk(m.b[bestj+bestsize]) &&
m.a[besti+bestsize] == m.b[bestj+bestsize] {
bestsize += 1
}
return Match{A: besti, B: bestj, Size: bestsize}
}
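// exampleFindLongestMatchSketch is an illustrative sketch, not part of the
// upstream sources: over the full ranges below, the longest junk-free match
// is a[1:3] == b[0:2], i.e. Match{A: 1, B: 0, Size: 2}.
func exampleFindLongestMatchSketch() Match {
	m := NewMatcher(
		[]string{"q", "a", "b", "x", "c", "d"},
		[]string{"a", "b", "y", "c", "d", "f"},
	)
	return m.findLongestMatch(0, 6, 0, 6)
}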
// Return list of triples describing matching subsequences.
//
// Each triple is of the form (i, j, n), and means that
// a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in
// i and in j. It's also guaranteed that if (i, j, n) and (i', j', n') are
// adjacent triples in the list, and the second is not the last triple in the
// list, then i+n != i' or j+n != j'. IOW, adjacent triples never describe
// adjacent equal blocks.
//
// The last triple is a dummy, (len(a), len(b), 0), and is the only
// triple with n==0.
func (m *SequenceMatcher) GetMatchingBlocks() []Match {
if m.matchingBlocks != nil {
return m.matchingBlocks
}
var matchBlocks func(alo, ahi, blo, bhi int, matched []Match) []Match
matchBlocks = func(alo, ahi, blo, bhi int, matched []Match) []Match {
match := m.findLongestMatch(alo, ahi, blo, bhi)
i, j, k := match.A, match.B, match.Size
if match.Size > 0 {
if alo < i && blo < j {
matched = matchBlocks(alo, i, blo, j, matched)
}
matched = append(matched, match)
if i+k < ahi && j+k < bhi {
matched = matchBlocks(i+k, ahi, j+k, bhi, matched)
}
}
return matched
}
matched := matchBlocks(0, len(m.a), 0, len(m.b), nil)
// It's possible that we have adjacent equal blocks in the
// matching_blocks list now.
nonAdjacent := []Match{}
i1, j1, k1 := 0, 0, 0
for _, b := range matched {
// Is this block adjacent to i1, j1, k1?
i2, j2, k2 := b.A, b.B, b.Size
if i1+k1 == i2 && j1+k1 == j2 {
// Yes, so collapse them -- this just increases the length of
// the first block by the length of the second, and the first
// block so lengthened remains the block to compare against.
k1 += k2
} else {
// Not adjacent. Remember the first block (k1==0 means it's
// the dummy we started with), and make the second block the
// new block to compare against.
if k1 > 0 {
nonAdjacent = append(nonAdjacent, Match{i1, j1, k1})
}
i1, j1, k1 = i2, j2, k2
}
}
if k1 > 0 {
nonAdjacent = append(nonAdjacent, Match{i1, j1, k1})
}
nonAdjacent = append(nonAdjacent, Match{len(m.a), len(m.b), 0})
m.matchingBlocks = nonAdjacent
return m.matchingBlocks
}
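For orientation, here is a minimal, hedged usage sketch of the matcher API documented above. It is not part of the vendored file; it assumes the conventional upstream import path github.com/pmezard/go-difflib/difflib, and the input slices are purely illustrative.

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib" // assumed import path for this vendored package
)

func main() {
	a := []string{"one", "two", "three", "four"}
	b := []string{"zero", "one", "two", "four"}
	m := difflib.NewMatcher(a, b)
	for _, blk := range m.GetMatchingBlocks() {
		// Each Match means a[A:A+Size] == b[B:B+Size]; the final entry is the
		// (len(a), len(b), 0) sentinel described in the comment above.
		fmt.Printf("a[%d:%d] == b[%d:%d]\n", blk.A, blk.A+blk.Size, blk.B, blk.B+blk.Size)
	}
}
```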
// Return list of 5-tuples describing how to turn a into b.
//
// Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple
// has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the
// tuple preceding it, and likewise for j1 == the previous j2.
//
// The tags are characters, with these meanings:
//
// 'r' (replace): a[i1:i2] should be replaced by b[j1:j2]
//
// 'd' (delete): a[i1:i2] should be deleted, j1==j2 in this case.
//
// 'i' (insert): b[j1:j2] should be inserted at a[i1:i1], i1==i2 in this case.
//
// 'e' (equal): a[i1:i2] == b[j1:j2]
func (m *SequenceMatcher) GetOpCodes() []OpCode {
if m.opCodes != nil {
return m.opCodes
}
i, j := 0, 0
matching := m.GetMatchingBlocks()
opCodes := make([]OpCode, 0, len(matching))
for _, m := range matching {
// invariant: we've pumped out correct diffs to change
// a[:i] into b[:j], and the next matching block is
// a[ai:ai+size] == b[bj:bj+size]. So we need to pump
// out a diff to change a[i:ai] into b[j:bj], pump out
// the matching block, and move (i,j) beyond the match
ai, bj, size := m.A, m.B, m.Size
tag := byte(0)
if i < ai && j < bj {
tag = 'r'
} else if i < ai {
tag = 'd'
} else if j < bj {
tag = 'i'
}
if tag > 0 {
opCodes = append(opCodes, OpCode{tag, i, ai, j, bj})
}
i, j = ai+size, bj+size
// the list of matching blocks is terminated by a
// sentinel with size 0
if size > 0 {
opCodes = append(opCodes, OpCode{'e', ai, i, bj, j})
}
}
m.opCodes = opCodes
return m.opCodes
}
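A similarly hedged sketch of GetOpCodes, under the same assumed import path; the inputs are illustrative.

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib" // assumed import path
)

func main() {
	a := []string{"one\n", "two\n", "three\n"}
	b := []string{"one\n", "tree\n", "three\n"}
	m := difflib.NewMatcher(a, b)
	for _, op := range m.GetOpCodes() {
		// Tag is 'r', 'd', 'i' or 'e', with the meanings documented above.
		fmt.Printf("%c a[%d:%d] b[%d:%d]\n", op.Tag, op.I1, op.I2, op.J1, op.J2)
	}
}
```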
// Isolate change clusters by eliminating ranges with no changes.
//
// Return a list of groups with up to n lines of context.
// Each group is in the same format as returned by GetOpCodes().
func (m *SequenceMatcher) GetGroupedOpCodes(n int) [][]OpCode {
if n < 0 {
n = 3
}
codes := m.GetOpCodes()
if len(codes) == 0 {
codes = []OpCode{{'e', 0, 1, 0, 1}}
}
// Fixup leading and trailing groups if they show no changes.
if codes[0].Tag == 'e' {
c := codes[0]
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
codes[0] = OpCode{c.Tag, max(i1, i2-n), i2, max(j1, j2-n), j2}
}
if codes[len(codes)-1].Tag == 'e' {
c := codes[len(codes)-1]
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
codes[len(codes)-1] = OpCode{c.Tag, i1, min(i2, i1+n), j1, min(j2, j1+n)}
}
nn := n + n
groups := [][]OpCode{}
group := []OpCode{}
for _, c := range codes {
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
// End the current group and start a new one whenever
// there is a large range with no changes.
if c.Tag == 'e' && i2-i1 > nn {
group = append(group, OpCode{c.Tag, i1, min(i2, i1+n),
j1, min(j2, j1+n)})
groups = append(groups, group)
group = []OpCode{}
i1, j1 = max(i1, i2-n), max(j1, j2-n)
}
group = append(group, OpCode{c.Tag, i1, i2, j1, j2})
}
if len(group) > 0 && !(len(group) == 1 && group[0].Tag == 'e') {
groups = append(groups, group)
}
return groups
}
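A hedged sketch showing how the context parameter trims unchanged regions (same assumed import path; inputs illustrative).

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib" // assumed import path
)

func main() {
	a := []string{"1", "2", "3", "4", "5", "6", "7", "8", "9"}
	b := []string{"1", "2", "3", "changed", "5", "6", "7", "8", "9"}
	m := difflib.NewMatcher(a, b)
	// With one line of context, only the opcodes around the change are kept.
	for _, group := range m.GetGroupedOpCodes(1) {
		for _, op := range group {
			fmt.Printf("%c a[%d:%d] b[%d:%d]\n", op.Tag, op.I1, op.I2, op.J1, op.J2)
		}
		fmt.Println("--")
	}
}
```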
// Return a measure of the sequences' similarity (float in [0,1]).
//
// Where T is the total number of elements in both sequences, and
// M is the number of matches, this is 2.0*M / T.
// Note that this is 1 if the sequences are identical, and 0 if
// they have nothing in common.
//
// .Ratio() is expensive to compute if you haven't already computed
// .GetMatchingBlocks() or .GetOpCodes(), in which case you may
// want to try .QuickRatio() or .RealQuickRatio() first to get an
// upper bound.
func (m *SequenceMatcher) Ratio() float64 {
matches := 0
for _, m := range m.GetMatchingBlocks() {
matches += m.Size
}
return calculateRatio(matches, len(m.a)+len(m.b))
}
// Return an upper bound on ratio() relatively quickly.
//
// This isn't defined beyond that it is an upper bound on .Ratio(), and
// is faster to compute.
func (m *SequenceMatcher) QuickRatio() float64 {
// viewing a and b as multisets, set matches to the cardinality
// of their intersection; this counts the number of matches
// without regard to order, so is clearly an upper bound
if m.fullBCount == nil {
m.fullBCount = map[string]int{}
for _, s := range m.b {
m.fullBCount[s] = m.fullBCount[s] + 1
}
}
// avail[x] is the number of times x appears in 'b' less the
// number of times we've seen it in 'a' so far ... kinda
avail := map[string]int{}
matches := 0
for _, s := range m.a {
n, ok := avail[s]
if !ok {
n = m.fullBCount[s]
}
avail[s] = n - 1
if n > 0 {
matches += 1
}
}
return calculateRatio(matches, len(m.a)+len(m.b))
}
// Return an upper bound on ratio() very quickly.
//
// This isn't defined beyond that it is an upper bound on .Ratio(), and
// is faster to compute than either .Ratio() or .QuickRatio().
func (m *SequenceMatcher) RealQuickRatio() float64 {
la, lb := len(m.a), len(m.b)
return calculateRatio(min(la, lb), la+lb)
}
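A hedged sketch of the three ratio variants (same assumed import path; inputs illustrative). As documented above, QuickRatio and RealQuickRatio are cheap upper bounds on Ratio.

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib" // assumed import path
)

func main() {
	a := []string{"one", "two", "three"}
	b := []string{"one", "three", "four"}
	m := difflib.NewMatcher(a, b)
	// Ratio() <= QuickRatio() <= RealQuickRatio() for the same inputs.
	fmt.Println(m.Ratio(), m.QuickRatio(), m.RealQuickRatio())
}
```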
// Convert range to the "ed" format
func formatRangeUnified(start, stop int) string {
// Per the diff spec at http://www.unix.org/single_unix_specification/
beginning := start + 1 // lines start numbering with one
length := stop - start
if length == 1 {
return fmt.Sprintf("%d", beginning)
}
if length == 0 {
beginning -= 1 // empty ranges begin at line just before the range
}
return fmt.Sprintf("%d,%d", beginning, length)
}
// Unified diff parameters
type UnifiedDiff struct {
A []string // First sequence lines
FromFile string // First file name
FromDate string // First file time
B []string // Second sequence lines
ToFile string // Second file name
ToDate string // Second file time
Eol string // Headers end of line, defaults to LF
Context int // Number of context lines
}
// Compare two sequences of lines; generate the delta as a unified diff.
//
// Unified diffs are a compact way of showing line changes and a few
// lines of context. The number of context lines is set by diff.Context.
//
// By default, the diff control lines (those with ---, +++, or @@) are
// created with a trailing newline. This is helpful so that inputs
// created from file.readlines() result in diffs that are suitable for
// file.writelines() since both the inputs and outputs have trailing
// newlines.
//
// For inputs that do not have trailing newlines, set diff.Eol to ""
// so that the output will be uniformly newline free.
//
// The unidiff format normally has a header for filenames and modification
// times. Any or all of these may be specified using strings for
// diff.FromFile, diff.ToFile, diff.FromDate, and diff.ToDate.
// The modification times are normally expressed in the ISO 8601 format.
func WriteUnifiedDiff(writer io.Writer, diff UnifiedDiff) error {
buf := bufio.NewWriter(writer)
defer buf.Flush()
wf := func(format string, args ...interface{}) error {
_, err := buf.WriteString(fmt.Sprintf(format, args...))
return err
}
ws := func(s string) error {
_, err := buf.WriteString(s)
return err
}
if len(diff.Eol) == 0 {
diff.Eol = "\n"
}
started := false
m := NewMatcher(diff.A, diff.B)
for _, g := range m.GetGroupedOpCodes(diff.Context) {
if !started {
started = true
fromDate := ""
if len(diff.FromDate) > 0 {
fromDate = "\t" + diff.FromDate
}
toDate := ""
if len(diff.ToDate) > 0 {
toDate = "\t" + diff.ToDate
}
if diff.FromFile != "" || diff.ToFile != "" {
err := wf("--- %s%s%s", diff.FromFile, fromDate, diff.Eol)
if err != nil {
return err
}
err = wf("+++ %s%s%s", diff.ToFile, toDate, diff.Eol)
if err != nil {
return err
}
}
}
first, last := g[0], g[len(g)-1]
range1 := formatRangeUnified(first.I1, last.I2)
range2 := formatRangeUnified(first.J1, last.J2)
if err := wf("@@ -%s +%s @@%s", range1, range2, diff.Eol); err != nil {
return err
}
for _, c := range g {
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
if c.Tag == 'e' {
for _, line := range diff.A[i1:i2] {
if err := ws(" " + line); err != nil {
return err
}
}
continue
}
if c.Tag == 'r' || c.Tag == 'd' {
for _, line := range diff.A[i1:i2] {
if err := ws("-" + line); err != nil {
return err
}
}
}
if c.Tag == 'r' || c.Tag == 'i' {
for _, line := range diff.B[j1:j2] {
if err := ws("+" + line); err != nil {
return err
}
}
}
}
}
return nil
}
// Like WriteUnifiedDiff but returns the diff as a string.
func GetUnifiedDiffString(diff UnifiedDiff) (string, error) {
w := &bytes.Buffer{}
err := WriteUnifiedDiff(w, diff)
return string(w.Bytes()), err
}
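A hedged end-to-end sketch producing a unified diff via the struct above (same assumed import path; file names and contents are illustrative). ContextDiff/GetContextDiffString further below accept the same fields, since ContextDiff is declared as a UnifiedDiff.

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib" // assumed import path
)

func main() {
	diff := difflib.UnifiedDiff{
		A:        difflib.SplitLines("one\ntwo\nthree\n"),
		B:        difflib.SplitLines("one\ntree\nthree\n"),
		FromFile: "a.txt", // illustrative names
		ToFile:   "b.txt",
		Context:  3,
	}
	text, err := difflib.GetUnifiedDiffString(diff)
	if err != nil {
		panic(err)
	}
	fmt.Print(text)
}
```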
// Convert range to the "ed" format.
func formatRangeContext(start, stop int) string {
// Per the diff spec at http://www.unix.org/single_unix_specification/
beginning := start + 1 // lines start numbering with one
length := stop - start
if length == 0 {
beginning -= 1 // empty ranges begin at line just before the range
}
if length <= 1 {
return fmt.Sprintf("%d", beginning)
}
return fmt.Sprintf("%d,%d", beginning, beginning+length-1)
}
type ContextDiff UnifiedDiff
// Compare two sequences of lines; generate the delta as a context diff.
//
// Context diffs are a compact way of showing line changes and a few
// lines of context. The number of context lines is set by diff.Context.
//
// By default, the diff control lines (those with *** or ---) are
// created with a trailing newline.
//
// For inputs that do not have trailing newlines, set the diff.Eol
// argument to "" so that the output will be uniformly newline free.
//
// The context diff format normally has a header for filenames and
// modification times. Any or all of these may be specified using
// strings for diff.FromFile, diff.ToFile, diff.FromDate, diff.ToDate.
// The modification times are normally expressed in the ISO 8601 format.
// If not specified, the strings default to blanks.
func WriteContextDiff(writer io.Writer, diff ContextDiff) error {
buf := bufio.NewWriter(writer)
defer buf.Flush()
var diffErr error
wf := func(format string, args ...interface{}) {
_, err := buf.WriteString(fmt.Sprintf(format, args...))
if diffErr == nil && err != nil {
diffErr = err
}
}
ws := func(s string) {
_, err := buf.WriteString(s)
if diffErr == nil && err != nil {
diffErr = err
}
}
if len(diff.Eol) == 0 {
diff.Eol = "\n"
}
prefix := map[byte]string{
'i': "+ ",
'd': "- ",
'r': "! ",
'e': " ",
}
started := false
m := NewMatcher(diff.A, diff.B)
for _, g := range m.GetGroupedOpCodes(diff.Context) {
if !started {
started = true
fromDate := ""
if len(diff.FromDate) > 0 {
fromDate = "\t" + diff.FromDate
}
toDate := ""
if len(diff.ToDate) > 0 {
toDate = "\t" + diff.ToDate
}
if diff.FromFile != "" || diff.ToFile != "" {
wf("*** %s%s%s", diff.FromFile, fromDate, diff.Eol)
wf("--- %s%s%s", diff.ToFile, toDate, diff.Eol)
}
}
first, last := g[0], g[len(g)-1]
ws("***************" + diff.Eol)
range1 := formatRangeContext(first.I1, last.I2)
wf("*** %s ****%s", range1, diff.Eol)
for _, c := range g {
if c.Tag == 'r' || c.Tag == 'd' {
for _, cc := range g {
if cc.Tag == 'i' {
continue
}
for _, line := range diff.A[cc.I1:cc.I2] {
ws(prefix[cc.Tag] + line)
}
}
break
}
}
range2 := formatRangeContext(first.J1, last.J2)
wf("--- %s ----%s", range2, diff.Eol)
for _, c := range g {
if c.Tag == 'r' || c.Tag == 'i' {
for _, cc := range g {
if cc.Tag == 'd' {
continue
}
for _, line := range diff.B[cc.J1:cc.J2] {
ws(prefix[cc.Tag] + line)
}
}
break
}
}
}
return diffErr
}
// Like WriteContextDiff but returns the diff as a string.
func GetContextDiffString(diff ContextDiff) (string, error) {
w := &bytes.Buffer{}
err := WriteContextDiff(w, diff)
return string(w.Bytes()), err
}
// Split a string on "\n" while preserving them. The output can be used
// as input for UnifiedDiff and ContextDiff structures.
func SplitLines(s string) []string {
lines := strings.SplitAfter(s, "\n")
lines[len(lines)-1] += "\n"
return lines
}

27
vendor/github.com/rs/xid/.appveyor.yml generated vendored Normal file
View File

@@ -0,0 +1,27 @@
version: 1.0.0.{build}
platform: x64
branches:
only:
- master
clone_folder: c:\gopath\src\github.com\rs\xid
environment:
GOPATH: c:\gopath
install:
- echo %PATH%
- echo %GOPATH%
- set PATH=%GOPATH%\bin;c:\go\bin;%PATH%
- go version
- go env
- go get -t .
build_script:
- go build
test_script:
- go test

8
vendor/github.com/rs/xid/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,8 @@
language: go
go:
- "1.9"
- "1.10"
- "master"
matrix:
allow_failures:
- go: "master"

19
vendor/github.com/rs/xid/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,19 @@
Copyright (c) 2015 Olivier Poitrey <rs@dailymotion.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is furnished
to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

116
vendor/github.com/rs/xid/README.md generated vendored Normal file
View File

@@ -0,0 +1,116 @@
# Globally Unique ID Generator
[![godoc](http://img.shields.io/badge/godoc-reference-blue.svg?style=flat)](https://godoc.org/github.com/rs/xid) [![license](http://img.shields.io/badge/license-MIT-red.svg?style=flat)](https://raw.githubusercontent.com/rs/xid/master/LICENSE) [![Build Status](https://travis-ci.org/rs/xid.svg?branch=master)](https://travis-ci.org/rs/xid) [![Coverage](http://gocover.io/_badge/github.com/rs/xid)](http://gocover.io/github.com/rs/xid)
Package xid is a globally unique id generator library, ready to safely be used directly in your server code.
Xid uses the Mongo Object ID algorithm to generate globally unique ids with a different serialization (base32 hex) to make it shorter when transported as a string:
https://docs.mongodb.org/manual/reference/object-id/
- 4-byte value representing the seconds since the Unix epoch,
- 3-byte machine identifier,
- 2-byte process id, and
- 3-byte counter, starting with a random value.
The binary representation of the id is compatible with Mongo's 12-byte Object IDs.
The string representation is using base32 hex (w/o padding) for better space efficiency
when stored in that form (20 bytes). The hex variant of base32 is used to retain the
sortable property of the id.
Xid doesn't use base64 because its case sensitivity and its 2 non-alphanumeric chars may be an
issue when transported as a string between various systems. Base36 wasn't retained either
because 1/ it's not standard, 2/ the resulting size is not predictable (not bit aligned),
and 3/ it would not remain sortable. To validate a base32 `xid`, expect a 20-char-long,
all-lowercase sequence of `a` to `v` letters and `0` to `9` digits (`[0-9a-v]{20}`).
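A hedged sketch of validating that textual form in Go; the character class comes from the sentence above and the sample id from the Usage section below, everything else is illustrative.

```go
package main

import (
	"fmt"
	"regexp"
)

// xidRe matches the 20-char base32-hex string form described above.
var xidRe = regexp.MustCompile(`^[0-9a-v]{20}$`)

func main() {
	fmt.Println(xidRe.MatchString("9m4e2mr0ui3e8a215n4g")) // true
	fmt.Println(xidRe.MatchString("not-an-xid"))           // false
}
```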
UUIDs are 16 bytes (128 bits) and 36 chars as string representation. Twitter Snowflake
ids are 8 bytes (64 bits) but require machine/data-center configuration and/or central
generator servers. xid stands in between with 12 bytes (96 bits) and a more compact
URL-safe string representation (20 chars). No configuration or central generator server
is required so it can be used directly in server's code.
| Name | Binary Size | String Size | Features
|-------------|-------------|----------------|----------------
| [UUID] | 16 bytes | 36 chars | configuration free, not sortable
| [shortuuid] | 16 bytes | 22 chars | configuration free, not sortable
| [Snowflake] | 8 bytes | up to 20 chars | needs machine/DC configuration, needs central server, sortable
| [MongoID] | 12 bytes | 24 chars | configuration free, sortable
| xid | 12 bytes | 20 chars | configuration free, sortable
[UUID]: https://en.wikipedia.org/wiki/Universally_unique_identifier
[shortuuid]: https://github.com/stochastic-technologies/shortuuid
[Snowflake]: https://blog.twitter.com/2010/announcing-snowflake
[MongoID]: https://docs.mongodb.org/manual/reference/object-id/
Features:
- Size: 12 bytes (96 bits), smaller than UUID, larger than snowflake
- Base32 hex encoded by default (20 chars when transported as printable string, still sortable)
- No configuration required: you don't need to set a unique machine and/or data center id
- K-ordered
- Embedded time with 1 second precision
- Uniqueness guaranteed for 16,777,216 (24 bits) unique ids per second and per host/process
- Lock-free (i.e.: unlike UUIDv1 and v2)
Best used with [zerolog](https://github.com/rs/zerolog)'s
[RequestIDHandler](https://godoc.org/github.com/rs/zerolog/hlog#RequestIDHandler).
Notes:
- Xid depends on the system time and a monotonic counter, so it is not cryptographically secure. If unpredictability of IDs is important, you should not use Xids. It is worth noting that most other UUID-like implementations are also not cryptographically secure. If you want a truly random ID generator, use libraries that rely on cryptographically secure sources (like /dev/urandom on unix, crypto/rand in golang).
References:
- http://www.slideshare.net/davegardnerisme/unique-id-generation-in-distributed-systems
- https://en.wikipedia.org/wiki/Universally_unique_identifier
- https://blog.twitter.com/2010/announcing-snowflake
- Python port by [Graham Abbott](https://github.com/graham): https://github.com/graham/python_xid
- Scala port by [Egor Kolotaev](https://github.com/kolotaev): https://github.com/kolotaev/ride
- Rust port by [Jérôme Renard](https://github.com/jeromer/): https://github.com/jeromer/libxid
- Ruby port by [Valar](https://github.com/valarpirai/): https://github.com/valarpirai/ruby_xid
- Java port by [0xShamil](https://github.com/0xShamil/): https://github.com/0xShamil/java-xid
- Dart port by [Peter Bwire](https://github.com/pitabwire): https://pub.dev/packages/xid
## Install
go get github.com/rs/xid
## Usage
```go
guid := xid.New()
println(guid.String())
// Output: 9m4e2mr0ui3e8a215n4g
```
Get `xid` embedded info:
```go
guid.Machine()
guid.Pid()
guid.Time()
guid.Counter()
```
## Benchmark
Benchmark against Go [Maxim Bublis](https://github.com/satori)'s [UUID](https://github.com/satori/go.uuid).
```
BenchmarkXID 20000000 91.1 ns/op 32 B/op 1 allocs/op
BenchmarkXID-2 20000000 55.9 ns/op 32 B/op 1 allocs/op
BenchmarkXID-4 50000000 32.3 ns/op 32 B/op 1 allocs/op
BenchmarkUUIDv1 10000000 204 ns/op 48 B/op 1 allocs/op
BenchmarkUUIDv1-2 10000000 160 ns/op 48 B/op 1 allocs/op
BenchmarkUUIDv1-4 10000000 195 ns/op 48 B/op 1 allocs/op
BenchmarkUUIDv4 1000000 1503 ns/op 64 B/op 2 allocs/op
BenchmarkUUIDv4-2 1000000 1427 ns/op 64 B/op 2 allocs/op
BenchmarkUUIDv4-4 1000000 1452 ns/op 64 B/op 2 allocs/op
```
Note: UUIDv1 requires a global lock, hence the performance degradation as we add more CPUs.
## Licenses
All source code is licensed under the [MIT License](https://raw.github.com/rs/xid/master/LICENSE).

11
vendor/github.com/rs/xid/error.go generated vendored Normal file
View File

@@ -0,0 +1,11 @@
package xid
const (
// ErrInvalidID is returned when trying to unmarshal an invalid ID.
ErrInvalidID strErr = "xid: invalid ID"
)
// strErr allows declaring errors as constants.
type strErr string
func (err strErr) Error() string { return string(err) }
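strErr is the usual constant-error pattern. Below is a standalone, hedged sketch of the same technique; the pkgError and ErrNotFound names are illustrative and not part of xid.

```go
package main

import (
	"errors"
	"fmt"
)

// pkgError is a string-based error type, so errors can be declared as constants.
type pkgError string

func (e pkgError) Error() string { return string(e) }

const ErrNotFound pkgError = "example: not found"

func find(ok bool) error {
	if !ok {
		return ErrNotFound // constants can be returned like any error value
	}
	return nil
}

func main() {
	err := find(false)
	// errors.Is matches comparable error values by ==, so constants work.
	fmt.Println(errors.Is(err, ErrNotFound)) // true
}
```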

9
vendor/github.com/rs/xid/hostid_darwin.go generated vendored Normal file
View File

@@ -0,0 +1,9 @@
// +build darwin
package xid
import "syscall"
func readPlatformMachineID() (string, error) {
return syscall.Sysctl("kern.uuid")
}

Some files were not shown because too many files have changed in this diff.