update docs (#454)
MegaByte875 authored Feb 27, 2024
1 parent baf11b4 commit d9f5d39
Showing 6 changed files with 189 additions and 71 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -230,6 +230,7 @@ nebula-storaged-2 1/1 Running 0 19s
- [PV reclaim](doc/user/pv_reclaim.md)
- [PV expansion](doc/user/pv_expansion.md)
- [mTLS](doc/user/ssl_guide.md)
- [Specified cluster](doc/user/specified_cluster.md)
- [Security context](doc/user/security_context.md)
- [ngctl](doc/user/ngctl_guide.md)
- [nebula-console](doc/user/nebula_console.md)
21 changes: 0 additions & 21 deletions doc/user/add-ons.md
@@ -36,24 +36,3 @@ Kubernetes and provides more powerful and efficient features for managing applications
even images on Node.

Refer to the [openkruise installation documentation](https://openkruise.io/docs/installation) to get started.

## sig-storage-local-static-provisioner

**Note:**
It is only needed when you deploy NebulaGraph with local storage; otherwise it is not necessary.

[local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) manages the
PersistentVolume lifecycle for pre-allocated disks by detecting and creating PVs for each local disk on the host, and
cleaning up the disks when released. It does not support dynamic provisioning.

Follow
the [getting started guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/getting-started.md)
to deploy local-volume-provisioner to provision local volumes.

Refer to
the [best practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md)
for more information on local PVs in Kubernetes.

Follow
the [mount disks](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) guide
to mount the disks.
198 changes: 152 additions & 46 deletions doc/user/br_guide.md
@@ -7,81 +7,180 @@
* Operator version >= 1.4.0
* Set the field `enableBR` to __true__
* Sufficient computational resources can be scheduled to restore the NebulaGraph cluster (only needed in the restore scenario)
* S3 protocol compatible storage (AWS S3, MinIO, etc.)
* GCS or S3 credentials

## Backup cluster

#### Features:

* Support full and incremental backup
* Support GCS and S3 protocol compatible storage (AWS S3, MinIO, etc.)
* Support cleanup policies for expired backups
* Support cron backups, which can be paused

#### Backup

The fields in the table below are optional.

| Parameter | Description | Default |
|:---------------------|:--------------------------------------------------------------------------|:---------|
| `image`              | backup container image without the tag; `version` is used as the tag       | ``       |
| `nebula.version` | backup image tag | `` |
| `imagePullPolicy` | backup image pull policy | `Always` |
| `imagePullSecrets` | The secret to use for pulling the images | `[]` |
| `env` | backup container environment variables | `[]` |
| `resources` | backup pod resources | `{}` |
| `nodeSelector` | backup pod nodeSelector | `{}` |
| `tolerations` | backup pod tolerations | `[]` |
| `affinity` | backup pod affinity | `{}` |
| `initContainers` | backup pod init containers | `[]` |
| `sidecarContainers` | backup pod sidecar containers | `[]` |
| `volumes` | backup pod volumes | `[]` |
| `volumeMounts` | backup pod volume mounts | `[]` |
| `cleanBackupData` | Whether to clean backup data when the object is deleted from the cluster | `false` |
| `autoRemoveFinished`  | Whether to automatically remove jobs that have failed or completed        | `false`  |
| `config` | backup cluster config | `{}` |

Here is the [nebulabackup-gs.yaml](../../config/samples/nebulabackup-gs.yaml) example:

## Backup NebulaGraph cluster

#### Full backup

Update the [full-backup-job.yaml](../../config/samples/full-backup-job.yaml) parameters:

* $META_ADDRESS
* $BUCKET
* $ACCESS_KEY
* $SECRET_KEY
* $REGION
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcp-secret
type: Opaque
data:
  # The value must be the base64-encoded service-account JSON.
  credentials: <GOOGLE_APPLICATION_CREDENTIALS_JSON>
---
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaBackup
metadata:
  name: nb1024
spec:
  image: reg.vesoft-inc.com/cloud-dev/br-ent
  version: v3.7.0
  resources:
    limits:
      cpu: "200m"
      memory: 300Mi
    requests:
      cpu: 100m
      memory: 200Mi
  imagePullSecrets:
  - name: nebula-image
  # Jobs that have failed or completed will be removed automatically.
  autoRemoveFinished: true
  # CleanBackupData denotes whether to clean backup data when the object is deleted from the cluster;
  # if not set, the backup data will be retained.
  cleanBackupData: true
  config:
    # The name of the backup/restore nebula cluster
    clusterName: nebula
    gs:
      # Location in which the gs bucket is located.
      location: "us-central1"
      # Bucket in which to store the backup data.
      bucket: "nebula-test"
      # SecretName is the name of secret which stores google application credentials.
      # Secret key: credentials
      secretName: "gcp-secret"
```
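The `gcp-secret` referenced above must exist before the backup is applied. One way to create it with kubectl, assuming the Google service-account key sits in a local file `service-account.json` (a hypothetical path); kubectl base64-encodes the file content automatically:
```shell
# Create the secret; the data key must be "credentials" to match the sample above.
$ kubectl create secret generic gcp-secret \
    --from-file=credentials=service-account.json
```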
```shell
$ kubectl apply -f full-backup-job.yaml
$ kubectl describe job nebula-full-backup
# Pod name is shown under "Events"
$ kubectl logs $POD -f
$ kubectl apply -f nebulabackup-gs.yaml
$ kubectl get nb nb1024
NAME     TYPE   BACKUP                       STATUS     STARTED   COMPLETED   AGE
nb1024   full   BACKUP_2024_02_26_08_05_13   Complete   71s       1s          71s
```
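While the backup runs, you can follow its progress by watching the resource; `describe` surfaces the controller's events if the job stalls:
```shell
# Watch the backup until its status reaches Complete.
$ kubectl get nb nb1024 -w
$ kubectl describe nb nb1024
```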

#### Incremental backup
#### Cron backup

Update the [incremental-backup-job.yaml](../../config/samples/incremental-backup-job.yaml) parameters:
Here is the [cronbackup.yaml](../../config/samples/cronbackup.yaml) example:

* $META_ADDRESS
* $BUCKET
* $ACCESS_KEY
* $SECRET_KEY
* $REGION
```yaml
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCronBackup
metadata:
  name: cron123
spec:
  # The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron
  schedule: "*/5 * * * *"
  # MaxReservedTime specifies how long backups are kept.
  # It should be in duration string format (e.g. 60m, 24h).
  maxReservedTime: 60m
  # Specifies the backup that will be created when executing a CronBackup.
  backupTemplate:
    image: reg.vesoft-inc.com/cloud-dev/br-ent
    version: v3.7.0
    resources:
      limits:
        cpu: "200m"
        memory: 300Mi
      requests:
        cpu: 100m
        memory: 200Mi
    imagePullSecrets:
    - name: nebula-image
    autoRemoveFinished: true
    cleanBackupData: true
    config:
      clusterName: nebula
      gs:
        location: "us-central1"
        bucket: "nebula-test"
        secretName: "gcp-secret"
```
```shell
$ kubectl apply -f incremental-backup-job.yaml
$ kubectl describe job nebula-incr-backup
# Pod name is shown under "Events"
$ kubectl logs $POD -f
$ kubectl apply -f cronbackup.yaml
$ kubectl get ncb
NAME      SCHEDULE      LASTBACKUP                LASTSCHEDULETIME   LASTSUCCESSFULTIME   BACKUPCLEANTIME   AGE
cron123   */5 * * * *   cron123-20240228t102500   2m40s              64s                  54s               45m

$ kubectl get nb -l "apps.nebula-graph.io/cron-backup=cron123"
NAME                      TYPE   BACKUP                       STATUS     STARTED   COMPLETED   AGE
cron123-20240228t094500   full   BACKUP_2024_02_28_09_45_01   Complete   42m       41m         42m
cron123-20240228t102500   full   BACKUP_2024_02_28_10_26_08   Complete   85s       55s         85s
```
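The feature list above notes that cron backups can be paused. Assuming the spec exposes a boolean `pause` field (an assumption — verify against the NebulaCronBackup CRD), a merge patch would pause and resume scheduling:
```shell
# "pause" is an assumed field name; check the NebulaCronBackup CRD before relying on it.
$ kubectl patch ncb cron123 --type='merge' --patch '{"spec": {"pause": true}}'
# Set it back to false to resume scheduled backups.
$ kubectl patch ncb cron123 --type='merge' --patch '{"spec": {"pause": false}}'
```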

## Restore NebulaGraph cluster
## Restore cluster

The restore flow:
The fields in the table below are optional.

![avatar](../pictures/restore.png)
| Parameter | Description | Default |
|:---------------------|:-------------------------------------------------------------------|:---------|
| `nodeSelector` | restored nebula cluster nodeSelector | `{}` |
| `tolerations` | restored nebula cluster tolerations | `[]` |
| `affinity` | restored nebula cluster affinity | `{}` |
| `autoRemoveFailed`   | Whether to automatically remove the restored cluster if restore fails | `false`  |
| `config`             | restore config                                                       | `{}`     |

Update the [apps_v1alpha1_nebularestore.yaml](../../config/samples/nebularestore.yaml) fields:

* clusterName
* backupName
* concurrency
* S3 storage sections
* the secret `aws-s3-secret` data fields `access-key` and `secret-key`
Here is the [nebularestore-s3.yaml](../../config/samples/nebularestore-s3.yaml) example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-s3-secret
  name: aws-secret
type: Opaque
data:
  access-key: <ACCESS_KEY>
  secret-key: <SECRET_KEY>
  access_key: <ACCESS_KEY>
  secret_key: <SECRET_KEY>
---
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaRestore
metadata:
  name: restore1
  name: nr2048
spec:
  config:
    # The name of the restore nebula cluster
    # The name of the backup/restore nebula cluster
    clusterName: nebula
    # The name of the backup file.
    backupName: "BACKUP_2023_02_05_04_36_41"
    backupName: "BACKUP_2023_02_28_04_36_41"
    # Used to control the number of concurrent file downloads during data restoration.
    # The default value is 5.
    concurrency: 5
    concurrency: 50
    s3:
      # Region in which the S3 compatible bucket is located.
      region: "us-east-1"
@@ -90,12 +90,19 @@ spec:
      # Endpoint of S3 compatible storage service
      endpoint: "https://s3.us-east-1.amazonaws.com"
      # SecretName is the name of secret which stores access key and secret key.
      secretName: "aws-s3-secret"
      secretName: "aws-secret"
```
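Likewise, the `aws-secret` referenced by `secretName` must exist before the restore is applied; a minimal sketch (the data keys must be `access_key` and `secret_key` to match the sample above):
```shell
# Create the S3 credentials secret; kubectl base64-encodes the literal values.
$ kubectl create secret generic aws-secret \
    --from-literal=access_key=<ACCESS_KEY> \
    --from-literal=secret_key=<SECRET_KEY>
```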
```shell
$ kubectl apply -f apps_v1alpha1_nebularestore.yaml
$ kubectl get nr restore1 -w
$ kubectl apply -f nebularestore-s3.yaml
$ kubectl get nr nr2048
NAME     STATUS     RESTORED-CLUSTER   STARTED   COMPLETED   AGE
nr2048   Complete   ng6tdt             14m       6m55s       14m

$ kubectl get nc
NAME     READY   GRAPHD-DESIRED   GRAPHD-READY   METAD-DESIRED   METAD-READY   STORAGED-DESIRED   STORAGED-READY   AGE
nebula   True    1                1              3               3             3                  3                45m
ng6tdt   True    1                1              3               3             3                  3                10m
```

**Note:**
22 changes: 22 additions & 0 deletions doc/user/specified_cluster.md
@@ -0,0 +1,22 @@
## Specified cluster

You can restrict which NebulaGraph clusters the operator manages by specifying namespaces or a cluster label selector.
This feature is useful in scenarios such as a grayscale (canary) release of the operator or splitting management across clusters.

You can specify the clusters to manage through the controller-manager flags:
```shell
--nebula-object-selector string   nebula object selector (label query) to filter on, supports '=', '==', and '!=' (e.g. key1=value1,key2=value2).

--watch-namespaces strings        Namespaces restricts which namespaces the controller watches for updates to Kubernetes objects. If empty, all namespaces are watched. Multiple namespaces are separated by commas (e.g. ns1,ns2,ns3).
```
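For example, to watch only `ns1` and `ns2` and manage only clusters labeled `env=prod`, the controller-manager would be started with the following flags (values illustrative):
```shell
--watch-namespaces=ns1,ns2
--nebula-object-selector=env=prod
```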

Configuration steps:
Modify the `watchNamespaces` or `nebulaObjectSelector` parameters when deploying the operator via the Helm chart.
```yaml
# Namespaces restricts which namespaces the controller-manager watches for updates to Kubernetes objects. If empty, all namespaces are watched.
# e.g. ns1,ns2,ns3
watchNamespaces: ""

# nebula object selector (label query) to filter on, supports '=', '==', and '!=' (e.g. key1=value1,key2=value2).
nebulaObjectSelector: ""
```
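A sketch of setting these values on the Helm command line; the release and chart names below are illustrative, and commas inside `--set` values must be escaped with a backslash:
```shell
$ helm upgrade --install nebula-operator nebula-operator/nebula-operator \
    --set watchNamespaces="ns1\,ns2" \
    --set nebulaObjectSelector="env=prod"
```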
3 changes: 3 additions & 0 deletions doc/user/ssl_guide.md
@@ -131,6 +131,9 @@ sslCerts:
caCert: ""
# InsecureSkipVerify controls whether a client verifies the server's certificate chain and host name
insecureSkipVerify: false
# ServerName is used to verify the hostname on the returned certificates unless InsecureSkipVerify is given.
# It is also included in the client's handshake to support virtual hosting unless it is an IP address.
serverName: ""
# AutoMountServerCerts controls whether operator mounts server's certificate from secret.
autoMountServerCerts: false
```
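As a usage sketch, `serverName` matters when clients reach the service through an address that is not among the names in the server certificate; it points verification at a name that is (hostname below is illustrative):
```yaml
sslCerts:
  # Verify the server certificate against this name instead of the dialed address.
  serverName: "graphd.nebula.svc.cluster.local"
  insecureSkipVerify: false
```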
15 changes: 11 additions & 4 deletions doc/user/webhook.md
@@ -36,8 +36,8 @@ metadata:
app.kubernetes.io/instance: nebula-operator
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: nebula-operator
app.kubernetes.io/version: 1.7.0
helm.sh/chart: nebula-operator-1.7.0
app.kubernetes.io/version: 1.8.0
helm.sh/chart: nebula-operator-1.8.0
name: nebula-operator-webhook-issuer
namespace: default
resourceVersion: "109935202"
@@ -66,8 +66,8 @@ metadata:
app.kubernetes.io/instance: nebula-operator
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: nebula-operator
app.kubernetes.io/version: 1.7.0
helm.sh/chart: nebula-operator-1.7.0
app.kubernetes.io/version: 1.8.0
helm.sh/chart: nebula-operator-1.8.0
name: nebula-operator-webhook-cert
namespace: default
resourceVersion: "109935196"
@@ -120,4 +120,11 @@ HA mode
$ kubectl annotate nc nebula nebula-graph.io/ha-mode=true
$ kubectl patch nc nebula --type='merge' --patch '{"spec": {"graphd": {"replicas":1}}}'
Error from server: admission webhook "nebulaclustervalidating.nebula-graph.io" denied the request: spec.graphd.replicas: Invalid value: 1: should be at least 2 in HA mode
```
Deletion protection
```shell
$ kubectl annotate nc nebula -n nebula-test nebula-graph.io/delete-protection=true
$ kubectl delete nc nebula -n nebula-test
Error from server: admission webhook "nebulaclustervalidating.nebula-graph.io" denied the request: metadata.annotations[nebula-graph.io/delete-protection]: Forbidden: protected cluster cannot be deleted
```
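To delete a protected cluster, remove the annotation first (the trailing `-` in `kubectl annotate` deletes the annotation):
```shell
# Remove the delete-protection annotation, then delete the cluster.
$ kubectl annotate nc nebula -n nebula-test nebula-graph.io/delete-protection-
$ kubectl delete nc nebula -n nebula-test
```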
