
TiDB Setup

  1. Install TiDB
  2. Scale the TiDB cluster
  3. Customize the configuration of monitoring service
  4. Run benchmark on TiDB

Install TiDB

Since the official TiDB documentation is well organized, we skip the installation process here and focus on the configuration of TiDB. We strongly recommend that readers follow the official documentation to install TiDB.

Note: TiDB ships with a built-in monitoring service, including Prometheus and Grafana, so we don’t need to install them separately.

Scale the TiDB cluster

The official documentation also provides a well-written guide on scaling a TiDB cluster, so we only give a brief summary here. Readers can refer to the official documentation for more details.

The default TiDB cluster contains only one pd, one tidb, and one tikv instance. In our experiment, we scale the cluster out to 3 tikv instances to simulate a more realistic scenario. There are two ways to scale out the cluster:

  1. Directly edit the cluster spec:

     kubectl edit tc ${tidb_cluster_name} -n ${tidb_namespace}

  2. Use a one-line patch command:

     # set tikv's replicas to 3
     kubectl patch -n ${tidb_namespace} tc ${tidb_cluster_name} --type merge --patch '{"spec":{"tikv":{"replicas":3}}}'
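After either command, it is worth confirming that the new TiKV instances actually came up. A quick check (a sketch: the namespace and cluster name below are placeholders to adjust, and the block is guarded so it is a no-op where kubectl is unavailable):

```shell
# Assumption: replace these with your own namespace and cluster name.
tidb_namespace=tidb-cluster
tidb_cluster_name=basic

# Skip the check entirely if kubectl is not installed.
if command -v kubectl >/dev/null 2>&1; then
  # TiDB Operator labels TiKV Pods with app.kubernetes.io/component=tikv;
  # after the scale-out you should see three of them in Running state.
  kubectl get pods -n "${tidb_namespace}" -l app.kubernetes.io/component=tikv
fi
```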

Customize the configuration of monitoring service

Readers can refer to the k8s documentation and TiDB documentation for more details. Here we just provide a brief guide.

Step 1: Create external-configMap.yml according to the official template. You can find our example here.

Note: In this config file you can customize the configuration of Prometheus and Grafana. For example, you can change the retention time of Prometheus data, the refresh rate of the Grafana dashboard, etc. We changed the scrape_interval from 15s to 1s to make the monitoring service more responsive. You can compare the difference between the official template and our configuration.
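For reference, the change itself only touches Prometheus's global scrape settings. A minimal fragment (the surrounding keys follow whatever structure the official template uses, so only the Prometheus portion is sketched here):

```yaml
# Prometheus global settings inside the external config:
# scrape_interval lowered from the template's 15s to 1s.
global:
  scrape_interval: 1s
```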

Step 2: Copy tidb-monitor.yaml. You can also find it in our GitHub repository.

Step 3: Create external configMap by running the following command:

kubectl apply -f external-configMap.yml -n ${namespace}

Step 4: Install tidb monitor by running the following command:

kubectl apply -f tidb-monitor.yaml -n ${namespace}

Wait for the cluster Pods to become ready:

watch kubectl -n ${namespace} get pod

Step 5: Access the Prometheus dashboard by running the following command:

kubectl port-forward -n ${namespace} svc/basic-prometheus 9090 --address 0.0.0.0

Then you can open the Prometheus dashboard at http://localhost:9090 and check whether the configuration is correct.
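One way to check from the command line is Prometheus's status API, which returns the currently loaded configuration (a sketch, guarded so it is a no-op when no port-forward is running on localhost:9090):

```shell
# /api/v1/status/config is a standard Prometheus HTTP API endpoint that
# dumps the loaded configuration; grep for the interval we customized.
if curl -sf http://localhost:9090/api/v1/status/config >/dev/null 2>&1; then
  curl -s http://localhost:9090/api/v1/status/config | grep scrape_interval
fi
```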

Run benchmark on TiDB

We use TPC-C to simulate the workload of a real-world OLTP database. Readers can refer to this site for more details.

Step 1: Install TiUP

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
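The installer appends to your shell profile, so you may need to open a new shell before `tiup` is on your PATH. A quick sanity check, guarded so it is skipped where TiUP is absent:

```shell
# Confirm the tiup binary is reachable before proceeding to Step 2.
if command -v tiup >/dev/null 2>&1; then
  tiup --version
fi
```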

Step 2: Install TPC-C

tiup install bench

Step 3: Load Data

# -P: TiDB port, -D: target database, --warehouses: TPC-C scale factor,
# -T: number of client threads
tiup bench tpcc -P 14000 -D tpcc --warehouses 4 prepare -T 16

Step 4: Run benchmark

# restart the benchmark whenever a run finishes (stop with Ctrl-C)
while :
do
    tiup bench tpcc -P 14000 -D tpcc --warehouses 4 run -T 16
done
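The loop above restarts the benchmark indefinitely until you interrupt it. If you prefer a bounded run, here is a fixed-round variant (a sketch: the tiup invocation is guarded so the loop logic runs anywhere, and the `--time` per-round cap is an assumption to verify against `tiup bench tpcc run --help` for your version):

```shell
# Bounded alternative: run a fixed number of rounds instead of forever.
rounds=3
for i in $(seq 1 "$rounds"); do
  echo "benchmark round ${i} of ${rounds}"
  if command -v tiup >/dev/null 2>&1; then
    # --time caps each round's duration (assumed flag; check your version)
    tiup bench tpcc -P 14000 -D tpcc --warehouses 4 run -T 16 --time 10m
  fi
done
```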