Various Environment Configurations


For every application you install and start, remember to open the corresponding TCP port in the firewall and in your cloud provider's security-group rules, or it will not be reachable from outside.

Virtual Machine

  • Static network configuration (the interface's ifcfg file, typically /etc/sysconfig/network-scripts/ifcfg-ens160 on CentOS 7):
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=46db2a62-5e68-452b-8333-c8ff3386779e
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.88.221
NETMASK=255.255.255.0
GATEWAY=192.168.88.254
DNS1=192.168.88.102
DNS2=8.8.8.8
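To apply the new settings on CentOS 7 (a quick sketch; it assumes the file above is /etc/sysconfig/network-scripts/ifcfg-ens160 and that either the classic network service or NetworkManager manages the interface):

systemctl restart network                               # classic network service
nmcli connection reload && nmcli connection up ens160   # when NetworkManager manages the interface
ip addr show ens160                                     # confirm the static address is in place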

Java

Install the JDK:

yum install java-1.8.0-openjdk

Find the JDK installation path and name:

readlink -f $(which java)

Set the environment variables (in /etc/profile):

# set java environment
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.392.b08-2.el7_9.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Reload the system environment variables:

source /etc/profile
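Verify that the variables are in effect:

java -version
echo $JAVA_HOME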

MySQL

Native (host) deployment

Installing MySQL 5.7 on CentOS 7

  1. Create a working directory:
mkdir /opt/software
cd /opt/software
  2. Check whether MySQL is already installed:
rpm -qa | grep mysql

yum list installed | grep mysql

If it is installed, remove MySQL and its dependent packages:

yum -y remove mysql-libs.x86_64
  3. Install the MySQL YUM repository:
wget http://repo.mysql.com/mysql57-community-release-el7-8.noarch.rpm
rpm -ivh mysql57-community-release-el7-8.noarch.rpm
  4. Install MySQL (use --nogpgcheck if GPG key verification fails):
yum install mysql-server -y
yum install mysql-server --nogpgcheck -y
  5. Start MySQL:
systemctl start mysqld # start MySQL
  6. Retrieve the temporary default password:
grep 'temporary password' /var/log/mysqld.log
  7. Change the password:
mysql -u root -p

Adjust the password-validation policy (switch to a weaker policy, or relax individual rules):

# weak policy: set global validate_password_policy=0;
# medium policy: set global validate_password_policy=1;
show global variables like "validate_pass%";
# drop the special-character, mixed-case and digit requirements
set global validate_password_special_char_count=0;
set global validate_password_mixed_case_count=0;
set global validate_password_number_count=0;

Change the password:

alter user 'root'@'localhost' identified by '@test654321';
# create a new user authorized to connect from any IP
grant all privileges on *.* to root@"%" identified by '1234560' with grant option;
# reload privileges
flush privileges;
  8. List users:
select Host,User,authentication_string from mysql.user;
  9. Open the database port in the firewall:
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-ports
  10. System tuning:
ulimit -n 102400

# Edit /etc/sysctl.conf and append the following:
vm.overcommit_memory = 1
vm.max_map_count=655360
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_timestamps=1
net.ipv4.ip_forward=1

# Save, then apply with:
sysctl -p

Docker deployment

  1. Pull the MySQL image:
docker pull mysql:5.7
  2. Create host directories for the volume mounts:
mkdir -p /data/mysql-1/conf # -p: create parent directories as needed
mkdir -p /data/mysql-1/data
mkdir -p /data/mysql-1/log
  3. Add a custom my.cnf. The configs below are for a master/slave setup; to deploy just one MySQL instance, simply drop the replication-related lines.
    Master configuration
[mysqld]
## Must be unique within the LAN; defaults to server-id=1. Master and slave must use different values.
server-id=1
## Enable the binary log; the name is arbitrary (key setting for replication)
log-bin=master-bin
## Binary log format: one of row, statement, mixed
binlog-format=ROW
## Database to replicate; if omitted, all databases are replicated
binlog-do-db=<database_name>
port=3306
user=mysql
character-set-server=utf8
default_authentication_plugin=mysql_native_password
secure_file_priv=/var/lib/mysql
expire_logs_days=7
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
max_connections=1000
lower_case_table_names = 0
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8

Slave configuration

[mysqld]
## server_id must be unique
server-id=2
## Enable the binary log, in case this slave later acts as a master for other slaves
log-bin=mysql-slave-bin
## Relay log location
relay_log=mysql-relay-bin
read_only=1 ## make the slave read-only; without this the slave is writable
port=3307
user=mysql
character-set-server=utf8
default_authentication_plugin=mysql_native_password
secure_file_priv=/var/lib/mysql
expire_logs_days=7
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
max_connections=1000
lower_case_table_names = 0
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
  4. Start the MySQL container from the image, mounting the prepared host files:
docker run -id --name mysql-2 -p 3307:3307  -v /usr/local/docker/data/mysql2/conf/my.cnf:/etc/mysql/my.cnf -v /usr/local/docker/data/mysql2/data:/var/lib/mysql -v /usr/local/docker/data/mysql2/log:/var/log/mysql -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_TCP_PORT=3307 --restart=always --privileged=true mysql:5.7

What this docker run command does:

  1. --name gives the container the name mysql-2
  2. -p maps host port 3307 to container port 3307; external clients connect via host port 3307
  3. -v mounts the config file: /usr/local/docker/data/mysql2/conf/my.cnf:/etc/mysql/my.cnf
  4. -v mounts the data directory: /usr/local/docker/data/mysql2/data:/var/lib/mysql
  5. -v mounts the log directory: /usr/local/docker/data/mysql2/log:/var/log/mysql
  6. -e sets the MYSQL_ROOT_PASSWORD environment variable, i.e. the root password, to 123456
  7. --restart=always makes the container restart automatically
  8. --privileged=true runs the container in privileged mode
  9. -d runs the container in the background
  10. mysql:5.7 is the image to run
    In short: it runs a mysql:5.7 container named mysql-2, exposes it on port 3307, mounts the config, data and log paths from the host so the data is persisted, sets the root password, and enables privileged mode and automatic restart; the MySQL service is then managed through port 3307.
  5. Allow the root user to connect remotely:
alter user 'root'@'%' identified with mysql_native_password by '123456'; # allow remote access from any IP
flush privileges; # reload privileges
  6. Check that the container started successfully and that remote access works; a quick check is sketched below.
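For example (a sketch assuming the container name mysql-2 and host port 3307 from the command above; replace <host-ip> with the server's address):

docker ps | grep mysql-2                  # the container should be listed with an Up status
mysql -h <host-ip> -P 3307 -u root -p     # log in from another machine with the password set above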


Master/Slave Replication Setup

Reference:

Configure the master

  1. Edit the configuration file:
[mysqld]
# Basic settings
ssl=0
# disable SSL connections
character-set-server=utf8mb4
# default character set utf8mb4
log-bin=/var/lib/mysql/mysql-bin.log
# binary log file path
server-id=100
# unique server id
port=3306
# MySQL listening port
max_allowed_packet=1024M
# maximum packet size between client and server
ft_min_word_len=2
# minimum word length for full-text search
slow_query_log=ON
# enable the slow query log
slow_query_log_file=/var/lib/mysql/slow.log
# slow query log path
max_connections = 40960
# maximum number of concurrent client connections
max_connect_errors = 6000
# maximum connection errors allowed before a client is blocked
read_buffer_size = 1M
# read buffer size
innodb_buffer_pool_size = 6144M
# InnoDB buffer pool size
innodb_io_capacity = 1000
# InnoDB I/O capacity
default-time-zone = '+8:00'
# default time zone UTC+8
lower_case_table_names=1
# store all table names in lower case
wait_timeout=288000
# timeout for non-interactive connections, in seconds
interactive_timeout=288000
# timeout for interactive connections
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
# SQL mode

# Replication - master settings
## Replication filter: databases that should not be replicated (the mysql schema is usually excluded)
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema
binlog-ignore-db=sys
## Per-session memory used to buffer binary log entries during a transaction
binlog_cache_size=1M
## Replication format (mixed, statement, row; the default is statement)
binlog_format=mixed
## Days after which binary logs expire and are deleted automatically; 0 (the default) means never
expire_logs_days=7
## Skip all or specific replication errors on the slave so replication does not stop.
## e.g. 1062 = duplicate primary key, 1032 = row not found (master/slave data mismatch)
slave_skip_errors=1062

# Maximise durability and consistency when replicating with transactional InnoDB
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
  2. Create a replication user:
create user 'slave'@'%' identified by '123456';
grant all privileges on *.* to 'slave'@'%';
flush privileges;
  3. Check the master status and note the File and Position values in the output (they are needed when configuring the slave):
show master status;

Configure the slave

  1. Edit the configuration file:
[mysqld]
ssl=0
character-set-server=utf8mb4
server-id=101
port=3306
max_allowed_packet=1024M
ft_min_word_len=2
slow_query_log=ON
slow_query_log_file=/var/lib/mysql/slow.log
max_connections = 40960
max_connect_errors = 6000
read_buffer_size = 1M
innodb_buffer_pool_size = 6144M
innodb_io_capacity = 1000
default-time-zone = '+8:00'
lower_case_table_names=1
wait_timeout=288000
interactive_timeout=288000
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
# Replication filter: databases that should not be replicated (the mysql schema is usually excluded)
binlog-ignore-db=mysql
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema
binlog-ignore-db=sys
## Enable the binary log; the name is arbitrary but should be meaningful (this is the key setting)
log-bin=mysql-bin
## Per-session memory used to buffer binary log entries during a transaction
binlog_cache_size=1M
## Replication format (mixed, statement, row; the default is statement)
binlog_format=mixed
## Days after which binary logs expire and are deleted automatically; 0 (the default) means never
expire_logs_days=7
## Skip all or specific replication errors on the slave so replication does not stop.
## e.g. 1062 = duplicate primary key, 1032 = row not found (master/slave data mismatch)
slave_skip_errors=1062
## Relay log location
relay_log=replicas-mysql-relay-bin
## log_slave_updates makes the slave write replicated events to its own binary log
log_slave_updates=1
## Prevent writes (except from privileged threads)
read_only=1
  2. Restart MySQL:
service mysqld restart
  3. Point the slave at the master (on MySQL 8 you may also need to add get_master_public_key=1 to the statement):
## master_user: the replication user created above; master_password: its password; master_host: the master's IP;
## master_log_file / master_log_pos: the File and Position values noted from SHOW MASTER STATUS
change master to master_user='slave',master_password='123456',master_host='192.168.19.100',master_log_file='mysql-bin.000003',master_log_pos=1497;
## optionally skip one replication error and continue
set global sql_slave_skip_counter=1;

  4. Start replication on the slave:
start slave;
  5. Check the replication status:
show slave status;

If both replication threads report Yes, replication is working; see the check sketched below.

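One way to check the two replication threads from the shell (a sketch; run on the slave and adjust credentials as needed):

mysql -uroot -p -e 'show slave status\G' | grep -E 'Slave_(IO|SQL)_Running'
# healthy output:
#   Slave_IO_Running: Yes
#   Slave_SQL_Running: Yes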

Replication failures

If replication breaks, clear the previous replication configuration with:

stop slave;
reset slave all;

Reference: "MySQL同步故障: Slave_SQL_Running: No 两种解决办法" on CSDN (two fixes for the Slave_SQL_Running: No error).

MongoDB

  1. Download and install MongoDB on Windows
    From the "Download MongoDB Community Server | MongoDB" page, download the version you need and install it.


  2. Create the files MongoDB needs
    Under the data and log directories chosen during installation, create a db folder and a mongodb.log file respectively.

  3. Configure the environment variable
    Run the following in PowerShell to add MongoDB to the user PATH:

[System.Environment]::SetEnvironmentVariable("PATH",$env:path+";C:\Program Files\MongoDB\Server\7.0\bin",[System.EnvironmentVariableTarget]::User) # set the user PATH variable
$env:Path # show the PATH of the current session; open a new terminal for the change to take effect
  4. Edit mongod.cfg
    Update the data and log storage settings in mongod.cfg:
# Where and how to store data.
storage:
  dbPath: E:\MongoDB\Server\7.0\data\db
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: E:\MongoDB\Server\7.0\log\mongod.log
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
  5. Install MongoDB as a Windows service so it starts automatically at boot:
mongod --dbpath "E:\MongoDB\Server\5.0\data\db" --logpath "E:\MongoDB\Server\5.0\log\mongod.log" --install --serviceName "MongoDB5.3" --serviceDisplayName "MongoDB5.3" # install an older MongoDB as an auto-start service; run from an administrator command prompt
mongod --config "C:\Program Files\MongoDB\Server\7.0\bin\mongod.cfg" # start manually without installing a service
mongod --config "C:\Program Files\MongoDB\Server\7.0\bin\mongod.cfg" --install --serviceName "MongoDB" --serviceDisplayName "MongoDB" # install as a Windows service started at boot; the service can also be set to manual and started with: net start MongoDB
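A quick sanity check after installing the service (assumes the service name MongoDB from above; mongosh is MongoDB's shell and is a separate download for server 6.0 and later):

net start MongoDB        # start the service from an administrator prompt
mongosh --port 27017     # connect locally; a test> prompt means the server is reachable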



Nginx

Deploying Nginx with Docker

Pull the image

docker pull nginx
docker pull nginx:1.23 # pin to a specific version, e.g. 1.23

Temporary start

Start a temporary nginx container and copy its default configuration files out to the host.

Change to /home/ecs-user/mdware, create an nginx directory, and keep the configuration files under it.

mkdir nginx
docker run --name nginx -p 80:80 -d nginx
docker cp nginx:/etc/nginx/nginx.conf /home/ecs-user/mdware/nginx/
docker cp nginx:/etc/nginx/conf.d/ /home/ecs-user/mdware/nginx/conf/
docker cp nginx:/usr/share/nginx/html/ /home/ecs-user/mdware/nginx/html/
docker cp nginx:/var/log/nginx/ /home/ecs-user/mdware/nginx/logs/

Stop and remove the temporary nginx container

docker stop nginx
docker rm nginx

Real start

docker run -p 8081:80 \
-v /home/ecs-user/mdware/nginx/nginx.conf:/etc/nginx/nginx.conf \
-v /home/ecs-user/mdware/nginx/logs:/var/log/nginx \
-v /home/ecs-user/mdware/nginx/html:/usr/share/nginx/html \
-v /home/ecs-user/mdware/nginx/conf:/etc/nginx/conf.d \
-v /etc/localtime:/etc/localtime \
--name nginx \
--restart=always \
-d nginx

Verify that nginx is running, for example:
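A quick check (assumes the 8081:80 port mapping used above; replace <host-ip> with the server's address and make sure port 8081 is open in the firewall/security group):

curl -I http://<host-ip>:8081   # expect an HTTP/1.1 200 OK response with a Server: nginx header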


Redis

To install and deploy Redis, follow the steps below:

  • Official site: https://redis.io
  • Windows builds maintained on GitHub: https://github.com/tporadowski/redis/releases

To install and use Redis on Windows:

  1. Download the latest Redis release archive from the Redis website (https://redis.io/download).
  2. Extract the archive to the directory where you want Redis installed.
  3. In the Redis directory, find redis.windows.conf and rename it to redis.conf.
  4. Open redis.conf and review the bind setting: bind 127.0.0.1 accepts local connections only; to allow remote access to Redis, comment the line out or change it to bind 0.0.0.0 (and set a password).
  5. Open a command prompt (cmd) and change into the Redis directory.
  6. Run redis-server.exe redis.conf to start the Redis server.
  7. Open a second command prompt and change into the Redis directory.
  8. Run redis-cli.exe to connect to the running Redis server.
  • Windows service commands for redis-server:
    1. Uninstall the service: redis-server --service-uninstall
    2. Start the service: redis-server --service-start
    3. Stop the service: redis-server --service-stop

Redis is now installed and you are connected to the server. The redis-cli command-line tool can be used for everyday operations such as setting key/value pairs, reading values and deleting keys, for example:
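A few quick one-liners against the local default instance (port 6379):

redis-cli set greeting hello   # -> OK
redis-cli get greeting         # -> "hello"
redis-cli del greeting         # -> (integer) 1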

Installing Redis with Docker

  1. Pull the image:
docker pull redis:5.0.14
  2. Create a host directory for the mounted config:
mkdir -p redis/config
  3. Write the configuration file (redis.conf):
# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output.
logfile ""
# To enable logging to the Windows EventLog, just set 'syslog-enabled' to
# yes, and optionally update the other syslog parameters to suit your needs.
# If Redis is installed and launched as a Windows Service, this will
# automatically be enabled.
# syslog-enabled no
# Specify the source name of the events in the Windows Application log.
# syslog-ident redis
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
# If Redis is to be used as an in-memory-only cache without any kind of
# persistence, then the fork() mechanism used by the background AOF/RDB
# persistence is unnecessary. As an optimization, all persistence can be
# turned off in the Windows version of Redis. This will redirect heap
# allocations to the system heap allocator, and disable commands that would
# otherwise cause fork() operations: BGSAVE and BGREWRITEAOF.
# This flag may not be combined with any of the other flags that configure
# AOF and RDB operations.
# persistence-available [(yes)|no]
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# WARNING: not setting maxmemory will cause Redis to terminate with an
# out-of-memory exception if the heap limit is reached.
#
# NOTE: since Redis uses the system paging file to allocate the heap memory,
# the Working Set memory usage showed by the Windows Task Manager or by other
# tools such as ProcessExplorer will not always be accurate. For example, right
# after a background save of the RDB or the AOF files, the working set value
# may drop significantly. In order to check the correct amount of memory used
# by the redis-server to store the data, use the INFO client command. The INFO
# command shows only the memory used to store the redis data, not the extra
# memory used by the Windows process for its own requirements. The extra amount
# of memory not reported by the INFO command can be calculated subtracting the
# Peak Working Set reported by the Windows Task Manager and the used_memory_peak
# reported by the INFO command.
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
############################# Event notification ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis server but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf
  4. Start the Redis container:
docker run --name redis -p 6379:6379 -v /home/ecs-user/mdware/redis/config/redis.conf:/usr/local/etc/redis/redis.conf -d redis:5.0.14 redis-server /usr/local/etc/redis/redis.conf
  5. Set a password so remote connections are authenticated:
# add the password directive to redis.conf
requirepass 123456

After changing the configuration, restart the container with docker restart redis; you can then connect remotely, for example:
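A minimal remote-connection check (assumes the password 123456 set above and the default port 6379; replace <server-ip> with the host's address and make sure the port is open):

docker restart redis
redis-cli -h <server-ip> -p 6379 -a 123456 ping   # should reply PONG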


Docker

1. Installation guide

  1. Check the kernel version:
uname -r

Before installing Docker, check that the kernel meets the requirements. For CentOS 7, Docker officially recommends kernel 3.10 or later (3.8 and later reportedly also works).

Note: the commands in this article are run as root; if you are not root, prefix every command with sudo.

  2. Update yum packages as root (be careful with this step in production; fine in a learning environment):
yum -y update  # upgrades all packages, including software and the system kernel
yum -y upgrade # upgrades packages only, without upgrading software or the system kernel
  3. Remove old versions (if Docker was installed before):
yum erase docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine \
docker-ce

rm -rf /var/lib/docker

yum list installed | grep docker
yum remove xxx docker-buildx-plugin.x86_64 docker-compose-plugin.x86_64
  1. 安装 docker 依赖
1
yum install -y yum-utils device-mapper-persistent-data lvm2
  1. 设置 yum 源(以下两个都可用)
1
2
yum-config-manager --add-repo http://download.docker.com/linux/centos/docker-ce.repo   #官方中央仓库
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   #阿里云仓库
  1. 检索 docker 可用版本
1
yum list docker-ce --showduplicates | sort -r
  1. 安装指定版本
1
2
3
4
5
yum -y install docker-ce-18.03.1.ce #版本太低无法使用
yum -y install docker-ce-20.10.0-3.el7 #可根据k8s版本来确定
yum -y install docker-ce-20.10.0-3.el7 docker-ce-cli-20.10.0-3.el7 containerd.io
#直接安装最新版
yum install docker-ce -y

安装成功样图:

  1. 启动 docker 并设置开机自启
1
2
systemctl start docker
systemctl enable docker

  1. 配置国内镜像源
1
2
3
4
5
6
7
8
9
10
第一步:新建或编辑daemon.json
vi /etc/docker/daemon.json
第二步:daemon.json中编辑如下
{
"registry-mirrors": ["http://hub-mirror.c.163.com"]
}
第三步:重启docker
systemctl restart docker.service
第四步:执行docker info查看是否修改成功
docker info
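
配置完成后,可以用下面的命令快速确认镜像加速地址是否已生效(查看 docker info 输出中的 Registry Mirrors 字段):

1
docker info | grep -A 1 "Registry Mirrors"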

2.常见指令

查询远程镜像

查询远程 Docker 仓库镜像的主要命令和参数包括:

1. docker search 关键词

在 Docker Hub 上搜索镜像,支持分页显示等参数。

2. docker search --limit=n 关键词

限制搜索结果数量。

3. docker search -f 字段=值 关键词

按字段过滤搜索结果。

4. docker search --automated=true 关键词

仅搜索自动构建镜像。

5. docker search --no-trunc 关键词

输出完整的镜像描述。

6. docker search -s 星级 关键词(新版 Docker 中已由 --filter=stars=星级 取代)

按照仓库星级排列结果。

7. docker search --help

获取 docker search 支持的参数帮助。

8. docker inspect 镜像名

查看镜像的详细信息(需先将镜像拉取到本地后再执行)。

9. docker pull 镜像[:标签]

从 Docker Hub 拉取指定镜像。

10. docker pull -a

拉取所有 tagged 镜像及其历史版本。

11. docker pull --help

获取 docker pull 支持的参数帮助。

查询本地镜像

Docker 查询镜像的主要命令和参数包括:

1. docker images

查看所有本地镜像,默认列出所有镜像。

2. docker images -q

只输出镜像 ID,不显示其他信息。

3. docker images --filter=since=xxx

过滤出指定时间后创建的镜像。

4. docker images -f dangling=true

查询悬空镜像(即未被使用的镜像)。

5. docker images -a

显示所有镜像,包括中间层镜像。

6. docker images --digests

显示镜像的摘要信息。

7. docker images name

查找指定名称的镜像。

8. docker images --no-trunc

显示完整的镜像 ID(不截断输出)。

9. docker image inspect image

查看镜像的详细配置信息。

上传到远程仓库

将本地镜像上传到远程仓库

  1. 登录到远程仓库:
1
docker login <registry-name>
  1. 标记本地镜像:
1
docker tag <local-image-name> <registry-name>/<remote-image-name>
  1. 推送镜像到远程仓库:
1
docker push <registry-name>/<remote-image-name>

示例:
将名为 my-image 的本地镜像上传到 Docker Hub 上名为 my-repo 的远程仓库:

1
2
3
docker login docker.io
docker tag my-image my-repo/my-image
docker push my-repo/my-image

[!NOTE] 注意:

  • <registry-name> 是远程仓库的名称,例如 docker.io 或 my-private-registry.com。
  • <remote-image-name> 是你希望在远程仓库中使用的镜像名称。
  • 在推送镜像之前,你需要确保你有权访问远程仓库。

[!TIP] 提示:

  • 你可以使用 docker images 命令来查看本地镜像。
  • 你可以使用 docker search <image-name> 命令来搜索远程仓库中的镜像。
  • 你可以使用 docker pull <registry-name>/<remote-image-name> 命令从远程仓库中拉取镜像。

操作容器实例

进入容器实例内部

1
docker exec -it 容器ID/容器名 bash

从容器内拷贝文件到主机上

1
docker cp 容器ID/容器名:容器内文件路径 目标主机路径
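
反过来,也可以把宿主机文件拷贝进容器,下面是一个简单示意(示例中的容器名 mysql-1 与路径仅为举例):

1
2
docker cp 主机文件路径 容器ID/容器名:容器内目标路径
docker cp /data/mysql-1/conf/my.cnf mysql-1:/etc/mysql/my.cnf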

停止/删除容器

要停止并删除所有 Docker 容器,可以使用以下命令:

1
docker stop $(docker ps -aq)

这个命令会:

  • docker ps -aq:查询所有容器(包括已停止的)的 ID,并以空白分隔
  • $(docker ps -aq):执行上述查询并将结果作为子命令的参数
  • docker stop:停止参数列出的所有容器
    停止完容器后,可以使用以下命令删除所有容器:
1
docker rm $(docker ps -aq)

或者使用以下单行命令直接停止并删除:

1
docker stop $(docker ps -aq) && docker rm $(docker ps -aq)

还有其他方法:

1
docker ps -a -q | xargs docker rm

或者:

1
2
docker container stop $(docker container ls -aq) && \
docker container rm $(docker container ls -aq)

以上命令都可以很方便地停止并删除本地所有的 Docker 容器。选择其中一个按需使用即可。

另外,也可以使用 docker system prune 命令进行整理,一次性清理未使用的镜像、网络、数据卷等资源。
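
docker system prune 的常见用法示意如下(加 -a 会删除所有未被容器引用的镜像,--volumes 会一并删除未使用的数据卷,执行前请确认):

1
2
3
4
# 清理已停止的容器、未使用的网络、悬空镜像和构建缓存
docker system prune
# 进一步清理所有未被引用的镜像以及未使用的数据卷(慎用)
docker system prune -a --volumes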

查看容器详细信息

1
docker inspect 容器ID

查看 Docker 的 Ip 地址信息

1
ip addr show docker0

查看指定网络信息

1
docker network inspect mynetwork

Docker-Compose

下载 Docker-compose

下载 docker-compose 到 /usr/local/bin 中

1
curl -L https://github.com/docker/compose/releases/download/v2.23.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose #也可用 wget 下载后再移动到该路径;后续版本更新可自由切换为需要的docker-compose版本的url

授权

授予 docker-compose 文件权限

1
chmod +x /usr/local/bin/docker-compose

查看版本

1
docker-compose -v 

创建软连接

1
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Nacos

本地直接部署

  1. 下载 nacos 压缩包
  2. 解压并修改配置文件(单点 or 集群)
    application.properties:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos
### Include message field
server.error.include-message=ALWAYS
### Default web server port:
server.port=8848
#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false
### Specify local server's IP:
# nacos.inetutils.ip-address=
#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
spring.datasource.platform=mysql
### Count of DB:
db.num=1
### Connect URL of DB:
db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=nacos
db.password.0=nacos
### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

cluster.conf:

1
2
2.0.0.1:8846   # ip 为本地内网 ip,不是 localhost/127.0.0.1
2.0.0.1:8847
  1. 创建数据库 nacos,为该数据库配置用户名和密码以及权限,再导入 nacos-mysql.sql 文件自动新建相应数据表。
  2. sh startup.sh -m standalone 单点启动
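
如需集群启动,可在各节点配置好 cluster.conf 后直接运行启动脚本(不带 -m 参数时默认即集群模式),下面是一个简单示意:

1
2
3
4
# 集群模式启动(默认模式)
sh startup.sh
# 通过启动日志确认是否成功(日志位于 nacos/logs/start.out)
tail -f ../logs/start.out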

image.png


Docker 部署 Nacos

Docker 拉取镜像

拉取 nacos 镜像

1
docker pull nacos/nacos-server #默认拉取最新的nacos版本 ,如果需要拉取别的版本可以加:版本号(如:docker pull nacos/nacos-server:v2.2.0)

配置宿主机挂载目录

这一步是添加映射文件夹,将宿主机的文件映射到 nacos 容器中

1
2
mkdir -p /home/ecs-user/mdware/nacos/logs/                      #新建logs目录
mkdir -p /home/ecs-user/mdware/nacos/conf/ #新建conf目录

假启动

启动一个 nacos 容器并复制需要挂载的文件到宿主机,再关闭删除该容器(为了获取默认的配置文件)。

启动一个 nacos 容器

1
docker run -p 8848:8848 --name nacos -d nacos/nacos-server

复制挂载文件:

1
2
docker cp nacos:/home/nacos/logs/ /home/ecs-user/mdware/nacos/
docker cp nacos:/home/nacos/conf/ /home/ecs-user/mdware/nacos/

关闭删除容器:

1
docker rm -f nacos

实启动(2.0 版本后的 nacos 存在端口偏移,需要额外映射 9848、9849 端口)

1
2
docker run -d --name nacos -p 8848:8848 -p 9848:9848 -p 9849:9849 --privileged=true -e JVM_XMS=256m -e JVM_XMX=256m -e MODE=standalone -v /home/ecs-user/mdware/nacos/logs/:/home/nacos/logs/ -v /home/ecs-user/mdware/nacos/conf/:/home/nacos/conf/ --restart=always nacos/nacos-server #未鉴权
docker run -d --name nacos -p 8848:8848 -p 9848:9848 -p 9849:9849 --privileged=true -e JVM_XMS=256m -e JVM_XMX=256m -e MODE=standalone -e NACOS_AUTH_ENABLE=true -v /home/ecs-user/mdware/nacos/logs/:/home/nacos/logs/ -v /home/ecs-user/mdware/nacos/conf/:/home/nacos/conf/ --restart=always nacos/nacos-server #开启鉴权

参数解释

  1. docker run -d : 启动容器 -d 是后台启动并返回容器 id 的意思
  2. --name nacos :为容器指定一个名称
  3. -p 8848:8848 : 指定端口映射,注意这里的 p 不能大写,大写是随机端口映射
  4. --privileged=true : 扩大容器内的权限,将容器内的权限变为 root 权限,不加的话就是普通用户权限,可能会出现 cannot open directory
  5. -e JVM_XMS=256m : 为 jvm 启动时分配的内存
  6. -e JVM_XMX=256m : 为 jvm 运行过程中分配的最大内存
  7. -e MODE=standalone : 使用 standalone 模式(单机模式),MODE 值有 cluster(集群)模式/standalone 模式两种,MODE 必须大写
  8. -e NACOS_AUTH_ENABLE=true :开启 token 鉴权
  9. -v /home/ecs-user/mdware/nacos/logs/:/home/nacos/logs/ : 将容器的/home/nacos/logs 目录挂载到 /home/ecs-user/mdware/nacos/logs
  10. -v /home/ecs-user/mdware/nacos/conf/:/home/nacos/conf/ : 将容器的/home/nacos/conf 目录挂载到 /home/ecs-user/mdware/nacos/conf
  11. --restart=always : docker 重启时,自动启动相关容器

开启防火墙端口

需要在防火墙开放相关端口,云服务器还需要开放安全组对应端口:

1
2
3
4
5
6
7
8
## 开放端口 8848 9848 9849
firewall-cmd --zone=public --add-port=8848/tcp --permanent
firewall-cmd --zone=public --add-port=9848/tcp --permanent
firewall-cmd --zone=public --add-port=9849/tcp --permanent
## 重启防火墙
firewall-cmd --reload
## 查看所有开启的端口
firewall-cmd --zone=public --list-ports

重启完防火墙之后,需要重启 docker

1
2
## 重启 docker
systemctl restart docker

如需开启集群模式,可修改宿主机挂载的配置文件修改相应内容,再重启 nacos 容器即可更新配置。
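
下面是一个通过环境变量启动集群节点的简单示意(节点 IP 为举例,NACOS_SERVERS 等变量的具体含义以 nacos-docker 官方说明为准):

1
2
3
4
5
docker run -d --name nacos-cluster-1 -p 8848:8848 -p 9848:9848 -p 9849:9849 \
  -e MODE=cluster \
  -e NACOS_SERVERS="192.168.88.221:8848 192.168.88.222:8848 192.168.88.223:8848" \
  -e PREFER_HOST_MODE=ip \
  nacos/nacos-server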


Sentinel

  1. 下载 Sentinel 的 jar 包上传至 linux 系统上
  2. 使用 nohup 指令持久化后台运行 jar 包
  3. 后端持久运行 jar 包:
    nohup java -Xms100m -Xmx100m -jar score-api-0.0.1-SNAPSHOT.jar &(jar 文件名为示例,运行 Sentinel 控制台时请替换为实际下载的 sentinel-dashboard jar 包)
    image.png
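
以 Sentinel 控制台为例,一个带常用 JVM 参数的启动示意如下(jar 文件名与端口请按实际下载的版本调整):

1
2
nohup java -Dserver.port=8080 -Dcsp.sentinel.dashboard.server=localhost:8080 -Dproject.name=sentinel-dashboard \
  -jar sentinel-dashboard-1.8.6.jar > sentinel.log 2>&1 &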

Seata

服务端配置

  1. 下载 Seata 服务端
  2. 新建一个 namespace 用于 seata 使用:seata
    image.png
  3. 客户端配置- application.yml
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
server:  
port: 7091
spring:
application:
name: seata-server
logging:
config: classpath:logback-spring.xml
file:
path: ${log.home:${user.home}/logs/seata}
extend:
logstash-appender:
destination: 127.0.0.1:4560
kafka-appender:
bootstrap-servers: 127.0.0.1:9092
topic: logback_to_logstash
console:
user:
username: seata
password: seata
seata:
config:
# support: nacos, consul, apollo, zk, etcd3
type: nacos
nacos:
serverAddr: xxx:8848
namespace: 401254cb-2aa6-4be3-bd44-4a139ebdc687 #自定义的命名空间
group: SEATA_GROUP
username: nacos
password: nacos
dataId: seata-server.properties
registry:
# support: nacos, eureka, redis, zk, consul, etcd3, sofa
type: nacos
nacos:
application: seata-server
server-addr: xxx:8848
namespace: 401254cb-2aa6-4be3-bd44-4a139ebdc687 #自定义的命名空间
group: SEATA_GROUP
cluster: default
username: nacos
password: nacos
store:
# support: file 、 db 、 redis
mode: file
# server:
# service-port: 8091 #If not configured, the default is '${server.port} + 1000'
security:
secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
tokenValidityInMilliseconds: 1800000
ignore:
urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.jpeg,/**/*.ico,/api/v1/auth/login
  1. 初始 Mysql 数据库
    新建 seata 库->执行 mysql.sql 初始化脚本->【Seata 1.x 版本 mysql 脚本】压缩包目录 seata/script/db/mysql.sql
    image.png
  2. 导入初始配置到 nacos
  • 修改压缩包目录 seata/script/config-center/config.txt 文件中几处内容:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
# 存储模式
store.mode=db
store.db.datasource=druid
store.db.dbType=mysql
# 需要根据 mysql 的版本调整 driverClassName
# mysql8 及以上版本对应的 driver:com.mysql.cj.jdbc.Driver
# mysql8 以下版本的 driver:com.mysql.jdbc.Driver
store.db.driverClassName=com.mysql.jdbc.Driver
# 注意根据生产实际情况调整参数 host 和 port
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
# 数据库用户名密码
store.db.user=root
store.db.password=12345678
# 微服务里配置与这里一致,客户端事务组映射配置(要与客户端的 application.yml 中的配置保持一致)
service.vgroupMapping.user-service-seata-service-group=default
service.vgroupMapping.book-service-seata-service-group=default
service.vgroupMapping.borrow-service-seata-service-group=default

[!WARNING] 特别说明
配置事务分组 service.vgroupMapping.dev_tx_group=default
dev_tx_group:需要与客户端保持一致,可以自定义 (如上所示自定义)
default:需要跟客户端和 application.yml 中的 cluster 保持一致
default 必须要等于 registry.conf.cluster = “default”

  • 通过压缩包目录 seata/script/config-center/nacos/nacos-config.sh 将修改后的 config.txt 发布到 nacos 上
1
2
3
4
5
6
7
8
9
# 运行指令,通过 Git Bash Here
sh nacos-config.sh -h localhost -p 8848 -g SEATA_GROUP -t 891d7906-dd03-4b8c-9fe9-a1f0609b3189
# 具体说明参见: http://seata.io/zh-cn/docs/user/configurations.html
# -h: nacos host,默认 localhost
# -p: nacos 端口,默认 8848
# -g: nacos 分组,默认'SEATA_GROUP'.
# -t: 租户信息 Tenant information,对应 nacos namespace ID,默认''
# -u: nacos 用户名,默认''
# -w: nacos 用户密码,默认''

image.png
6. 启动 seata
根据系统环境启动 seata 服务端:
image.png

客户端配置

  1. 引入 Maven 依赖
1
2
3
4
5
<!--        seata-->  
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>
  1. 修改微服务的 application.yml
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
spring:  
application:
name: book-service
datasource:
url: jdbc:mysql://localhost:3306/springcloud?useUnicode=true&characterEncoding=utf8
username: root
password: 123456
driver-class-name: com.mysql.cj.jdbc.Driver
cloud:
nacos:
discovery:
# 配置 Nacos 注册中心地址
server-addr: xxx:8848
# namespace: 44a80565-86f1-450f-88ea-6f93801cd08c #设置命名空间为开发环境
sentinel:
transport:
# 添加监控页面地址即可
dashboard: 127.0.0.1:8080
seata:
tx-service-group: user-service-seata-service-group #必须配置和服务端的 service.vgroupMapping.xxx 一致
# 注册
registry:
# 使用 Nacos
type: nacos
nacos:
# 使用 Seata 的命名空间,这样才能正确找到 Seata 服务,由于组使用的是 SEATA_GROUP,配置默认值就是,就不用配了
namespace: 401254cb-2aa6-4be3-bd44-4a139ebdc687
username: nacos
password: nacos
server-addr: ${spring.cloud.nacos.discovery.server-addr}
# 配置
config:
type: nacos
nacos:
namespace: 401254cb-2aa6-4be3-bd44-4a139ebdc687
username: nacos
password: nacos
server-addr: ${spring.cloud.nacos.discovery.server-addr}
service:
vgroup-mapping:
user-service-seata-service-group: default #key 为事务组名(与上面的 tx-service-group 一致),value 为服务端集群名 default
  1. 配置完成启动微服务即可

RabbitMQ

  1. 下载安装 RabbitMQ 和 Erlang 并配置环境变量
  2. 安装完成后找到安装文件路径,找到 sbin 目录下,打开命令行 cmd,在命令行里切换到 sbin 目录下,输入如下命令:
1
rabbitmq-plugins enable rabbitmq_management

运行成功后,打开任务管理器的“服务”选项卡,找到 RabbitMQ 服务并右键重新启动;或者直接双击 sbin 下的 rabbitmq-server.bat(双击后稍等片刻)

  1. 安装完成后,通过浏览器访问 RabbitMQ 控制台 http://localhost:15672

    • 默认的端口号:5672
    • 默认的用户和密码是 guest
    • 管理后台的默认端口号:15672

image.png

常用命令:

  1. windows

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    14
    #查看端口占用情况
    netstat -ano | findstr :15672
    #下载插件
    rabbitmq-plugins.bat enable rabbitmq_management
    #查看是否安装成功
    rabbitmqctl.bat status
    #如果查看报错,可能是后台服务没有开启,开启后台服务后再重试
    net start RabbitMQ
    #查看RabbitMQ已有用户以及用户对应的角色信息
    rabbitmqctl.bat list_users
    #RabbitMQ的默认用户名和密码是guest ,只能在本机访问,新增一个用户,并赋予超级管理员角色
    rabbitmqctl add_user 用户名 密码
    #设置用户角色
    rabbitmqctl set_user_tags 用户名 administrator
  2. linux

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    14
    15
    16
    17
    18
    19
    20
    21
    22
    23
    24
    25
    26
    27
    28
    29
    30
    #列出所有队列 :
    rabbitmqctl list_queues
    #列出所有交换器 :
    rabbitmqctl list_exchanges
    #列出所有绑定 :
    rabbitmqctl list_bindings
    #列出所有连接 :
    rabbitmqctl list_connections
    #列出所有通道 :
    rabbitmqctl list_channels
    #列出所有消费者 :
    rabbitmqctl list_consumers
    #查看队列的状态信息 :
    rabbitmqctl list_queues name messages_ready messages_unacknowledged
    #查看交换器的状态信息 :
    rabbitmqctl list_exchanges name type
    #查看连接的状态信息 :
    rabbitmqctl list_connections name user state
    #查看通道的状态信息 :
    rabbitmqctl list_channels connection_name user number_of_consumers
    #查看 RabbitMQ 节点的状态信息:
    rabbitmqctl status
    #查看 RabbitMQ 节点的详细状态信息:
    rabbitmqctl status --verbose
    #查看 RabbitMQ 节点的配置信息:
    rabbitmqctl environment
    #查看 RabbitMQ 节点的运行日志:
    rabbitmqctl report
    #查看 RabbitMQ 节点的内存使用情况:
    rabbitmqctl eval 'memory'
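
guest 用户默认只允许本机访问,若需远程登录管理后台,可参考下面的示意新建用户并授权(用户名 admin 与密码仅为举例):

1
2
3
4
5
# 新建用户并设置为管理员
rabbitmqctl add_user admin 123456
rabbitmqctl set_user_tags admin administrator
# 授予默认虚拟主机 / 的配置、写、读权限
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"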

RocketMQ

RocketMQ 依赖 Java 环境,要求有 JDK1.8 以上版本。RocketMQ 支持三种集群部署模式,单机部署模式如下教程所示。

创建 NameServer 服务

拉取 Rocketmq 镜像

1
docker pull rocketmqinc/rocketmq #拉取 RocketMQ 镜像,后续 NameServer 和 Broker 容器均基于该镜像创建

创建 Nameserver 数据卷挂载路径

1
mkdir -p  /home/ecs-user/mdware/rocketmq/data/namesrv/logs   /home/ecs-user/mdware/rocketmq/data/namesrv/store

创建 Namesrv 容器

1
2
3
4
5
6
7
8
9
docker run -d \
--restart=always \
--name rmqnamesrv \
-p 9876:9876 \
-v /home/ecs-user/mdware/rocketmq/data/namesrv/logs:/root/logs \
-v /home/ecs-user/mdware/rocketmq/data/namesrv/store:/root/store \
-e "MAX_POSSIBLE_HEAP=100000000" \
rocketmqinc/rocketmq \
sh mqnamesrv

容器参数说明:

参数 说明
-d 以守护线程方式启动
–restart=always docker 重启时候容器自动重启
-p 9876:9876 把容器内的端口 9876 挂载到宿主机 9876 上面
-e “MAX_POSSIBLE_HEAP=100000000” 设置容器的最大堆内存为 100000000
sh mqnamesrv 启动 namesrv 服务

创建 Broker 节点

创建 Broker 数据卷挂载路径

1
mkdir -p /home/ecs-user/mdware/rocketmq/data/broker/logs /home/ecs-user/mdware/rocketmq/data/broker/store /home/ecs-user/mdware/rocketmq/conf

创建配置文件

echo '' > /home/ecs-user/mdware/rocketmq/conf/broker.conf

vi /home/ecs-user/mdware/rocketmq/conf/broker.conf

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
# 所属集群名称,如果节点较多可以配置多个
brokerClusterName = DefaultCluster
#broker名称 ,master 和 slave 使用相同的名称,表明他们的主从关系
brokerName = broker-a
#0表示Master ,大于 0 表示不同的 slave
brokerId = 0
#表示几点做消息删除动作 ,默认是凌晨 4 点
deleteWhen = 04
#在磁盘上保留消息的时长 ,单位是小时
fileReservedTime = 48
#有三个值 :SYNC_MASTER,ASYNC_MASTER,SLAVE;同步和异步表示 Master 和 Slave 之间同步数据的机制;
brokerRole = ASYNC_MASTER
#刷盘策略 ,取值为:ASYNC_FLUSH,SYNC_FLUSH 表示同步刷盘和异步刷盘;SYNC_FLUSH 消息写入磁盘后才返回成功状态,ASYNC_FLUSH 不需要;
flushDiskType = ASYNC_FLUSH
# 设置 broker 节点所在服务器的外网 ip 地址(内网 ip 通过 ifconfig 查看) 换成自己的主机的 IP
brokerIP1 = 外网 IP
# 磁盘使用达到 85%之后,生产者再写入消息会报错 CODE: 14 DESC: service not available now, maybe disk full
diskMaxUsedSpaceRatio=85

创建 Docker 容器

1
2
3
4
5
6
7
8
9
10
11
12
13
docker run -d  \
--restart=always \
--name rmqbroker \
--link rmqnamesrv:namesrv \
-p 10911:10911 \
-p 10909:10909 \
-v /home/ecs-user/mdware/rocketmq/data/broker/logs:/root/logs \
-v /home/ecs-user/mdware/rocketmq/data/broker/store:/root/store \
-v /home/ecs-user/mdware/rocketmq/conf/broker.conf:/opt/rocketmq-4.4.0/conf/broker.conf \
-e "NAMESRV_ADDR=namesrv:9876" \
-e "MAX_POSSIBLE_HEAP=200000000" \
rocketmqinc/rocketmq \
sh mqbroker -c /opt/rocketmq-4.4.0/conf/broker.conf

参数说明

参数 说明
–link rmqnamesrv:namesrv 和 rmqnamesrv 容器通信
-p 10911:10911 把容器的非 vip 通道端口挂载到宿主机
-p 10909:10909 把容器的 vip 通道端口挂载到宿主机
-e “NAMESRV_ADDR=namesrv:9876” 指定 namesrv 的地址为本机 namesrv 的 ip 地址:9876
-e “MAX_POSSIBLE_HEAP=200000000” rocketmqinc/rocketmq sh mqbroker 指定 broker 服务的最大堆内存
sh mqbroker -c /opt/rocketmq-4.4.0/conf/broker.conf 指定配置文件启动 broker 节点
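
Broker 启动后,可以借助 RocketMQ 发行包自带的示例程序验证消息收发(以下命令为简单示意,若 tools.sh 不在当前目录,可先进入镜像内 rocketmq 的 bin 目录):

1
2
3
4
5
# 进入 broker 容器
docker exec -it rmqbroker bash
export NAMESRV_ADDR=namesrv:9876
sh tools.sh org.apache.rocketmq.example.quickstart.Producer
sh tools.sh org.apache.rocketmq.example.quickstart.Consumer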

创建 Rocketmq-console 服务

拉取镜像

1
docker pull pangliang/rocketmq-console-ng

构建容器

1
2
3
4
5
6
7
docker run -d \
--restart=always \
--name rmqadmin \
-e "JAVA_OPTS=-Drocketmq.namesrv.addr=172.17.0.3:9876 \
-Dcom.rocketmq.sendMessageWithVIPChannel=false" \
-p 9999:8080 \
pangliang/rocketmq-console-ng

[!WARNING] 注意
JAVA_OPTS=-Drocketmq.namesrv.addr=172.17.0.3:9876 中的 ip 地址是 docker 容器 rmqnamesrv 的容器内部 ip 地址,使用 docker inspect 查看获取

开放端口

如果是云服务器还需要开放安全组

1
2
firewall-cmd --zone=public --add-port=9999/tcp --permanent
firewall-cmd --zone=public --add-port=10911/tcp --permanent

部署成功结果如下图所示:



ElasticSearch

es默认用户名密码是:elastic / changeme

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
elasticsearch:
  image: alleyf/elasticsearch:7.17.6
  container_name: elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    # 设置集群名称
    cluster.name: elasticsearch
    # 以单一节点模式启动
    discovery.type: single-node
    ES_JAVA_OPTS: "-Xms512m -Xmx512m"
  volumes:
    - A:/docker/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins
    - A:/docker/elk/elasticsearch/data:/usr/share/elasticsearch/data
    - A:/docker/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
  # network_mode: "host"
  restart: always
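
启动后可以用下面的请求验证 ElasticSearch 是否正常(若镜像未开启 x-pack 安全认证,可去掉 -u 参数):

1
2
curl -u elastic:changeme http://localhost:9200
curl http://localhost:9200/_cat/health?v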

青龙面板

拉取镜像

1
docker pull whyour/qinglong:latest

启动容器

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
docker run -dit \
-v $PWD/ql/data:/ql/data \
-p 5700:5700 \
-e QlBaseUrl="/" \
-e QlPort="5700" \
--name qinglong \
--hostname qinglong \
--restart unless-stopped \
whyour/qinglong:latest
docker run -dit \
-v $PWD/ql/data:/ql/data \
# 冒号后面的 5700 为默认端口,如果设置了 QlPort, 需要跟 QlPort 保持一致
-p 5700:5700 \
# 部署路径非必须,比如 /test
-e QlBaseUrl="/" \
# 部署端口非必须,当使用 host 模式时,可以设置服务启动后的端口,默认 5700
-e QlPort="5700" \
--name qinglong \
--hostname qinglong \
--restart unless-stopped \
whyour/qinglong:latest

启动容器后防火墙和安全组放行 5700 端口。

安装依赖

  1. nodejs
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
request
crypto-js
prettytable
dotenv
jsdom
date-fns
tough-cookie
tslib
ws@7.4.3
ts-md5
jsdom -g
jieba
fs
form-data
json5
global-agent
png-js
@types/node
require
typescript
js-base64
axios
moment
  1. Python3
1
2
3
4
5
6
requests
canvas
ping3
jieba
PyExecJS
aiohttp
  1. Linux
1
2
3
bizCode
bizMsg
lxml

有部分 python 和 linux 依赖可能安装失败,不过不用担心大部分脚本都依赖 nodejs。

添加订阅


常用的仓库地址

Faker2 助力池版

星标 1300+ 脚本 300+ 
1
2
3
4
旧版:
ql repo https://git.metauniverse-cn.com/https://github.com/shufflewzc/faker2.git "jd_|jx_|gua_|jddj_|jdCookie" "activity|backUp" "^jd[^_]|USER|function|utils|sendNotify|ZooFaker_Necklace.js|JDJRValidator_|sign_graphics_validate|ql|JDSignValidator|magic|depend|h5sts" "main"
新版:
https://github.com/shufflewzc/faker2.git
Faker3 纯净版
星标 700+ 脚本 300+ 
1
2
3
4
旧版:
ql repo https://git.metauniverse-cn.com/https://github.com/shufflewzc/faker3.git "jd_|jx_|gua_|jddj_|jdCookie" "activity|backUp" "^jd[^_]|USER|function|utils|sendNotify|ZooFaker_Necklace.js|JDJRValidator_|sign_graphics_validate|ql|JDSignValidator|magic|depend|h5sts" "main"
新版:
https://github.com/shufflewzc/faker3.git
gys619
星标 1200+ 脚本 300+ 
1
2
3
4
旧版:
ql repo https://github.com/gys619/Absinthe.git "jd_|jx_|jddj_|gua_|getJDCookie|wskey" "activity|backUp" "^jd[^_]|USER|utils|ZooFaker_Necklace|JDJRValidator_|sign_graphics_validate|jddj_cookie|function|ql|magic|JDJR|JD" "main"
新版:
https://github.com/gys619/Absinthe.git
Akali5
星标 90+ 脚本 300+ 
1
2
3
4
旧版:
ql repo https://github.com/Akali5/jd-depot.git "jd_|jx_|jddj_|gua_|getJDCookie|wskey" "activity|backUp" "^jd[^_]|USER|utils|ZooFaker_Necklace|JDJRValidator_|sign_graphics_validate|jddj_cookie|function|ql|magic|JDJR|sendNotify|depend|h5|jdspider"
新版:
https://github.com/Akali5/jd-depot.git
KingRan
星标 170+ 脚本 110+ 
1
2
3
4
旧版:
ql repo https://github.com/KingRan/KR.git "jd_|jx_|jdCookie" "activity|backUp" "^jd[^_]|USER|utils|function|sign|sendNotify|ql|JDJR"
新版:
https://github.com/KingRan/KR.git
6dylan6
星标 500+ 脚本 90+ 
1
2
3
4
旧版:
ql repo https://github.com/6dylan6/jdpro.git "jd_|jx_|jddj_" "backUp" "^jd[^_]|USER|JD|function|sendNotify"
新版:
https://github.com/6dylan6/jdpro.git
zero205
星标 500+ 脚本 80+ 
1
2
3
4
旧版:
ql repo https://github.com/zero205/JD_tencent_scf.git "jd_|jx_|jdCookie" "backUp|icon" "^jd[^_]|USER|sendNotify|sign_graphics_validate|JDJR|JDSign|ql" "main"
新版:
https://github.com/zero205/JD_tencent_scf.git
ccwav 通知增强版和 CK 检测
星标 400+ 
1
2
3
4
5
6
7
旧版:
不包含 sendNotify:
ql repo https://github.com/ccwav/QLScript2.git "jd_" "NoUsed" "ql|utils|USER_AGENTS|jdCookie|JS_USER_AGENTS"
包含 sendNotify:
ql repo https://github.com/ccwav/QLScript2.git "jd_" "NoUsed" "ql|sendNotify|utils|USER_AGENTS|jdCookie|JS_USER_AGENTS"
新版:
https://github.com/ccwav/QLScript2.git

设置环境变量

在浏览器的无痕模式中打开 https://m.jd.com/ ,然后使用手机号登陆,cookie 有效时间为一个月,失效后需要重新登陆获取新的 cookie 信息。

1
https://m.jd.com/

浏览器安装 editcookie 插件,登录后点击该插件搜索 pt 关键词,可以看到 pt_pin 和 pt_key,复制字段值在青龙面板添加 JD_COOKIE环境变量,格式为:

1
pt_key=xxx;pt_pin=xxx;

Crawlab

拉取镜像

保证已经安装好 Docker,并能够拉取 Crawlab 和 MongoDB 的镜像。

1
2
docker pull crawlabteam/crawlab
docker pull mongo

配置 Docker-compose 文件

创建配置文件并命名为 docker-compose.yml,内容如下:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
version: '3.3'
services:
  master:
    image: crawlabteam/crawlab
    container_name: crawlab_master
    environment:
      CRAWLAB_NODE_MASTER: "Y"
      CRAWLAB_MONGO_HOST: "mongo"
    ports:
      - "8080:8080"
    depends_on:
      - mongo
  mongo:
    image: mongo:latest

启动 Crawlab

执行以下命令启动 Crawlab 以及 MongoDB。

1
docker-compose up -d

现在您可以打开浏览器并导航到 http://localhost:8080 并开始使用 Crawlab。

Coze-discord-proxy

通过 discord 代理 coze 使用 GPT4

1
2
3
4
5
6
7
8
9
10
11
12
docker run --name coze -d --restart always \
-p 7077:7077 \
-v $(pwd)/data:/app/coze-discord-proxy/data \
-e BOT_TOKEN="MTIwOTM0MTM1NjM2Mzg4MjUwNg.Gv7teI.0D7hXoAND8B9x0VZR_yg3vdkncClRweXYNzdkw" \
-e GUILD_ID="1193462695295467550" \
-e COZE_BOT_ID="1209341026255376415" \
-e PROXY_SECRET="coze123456" \
-e CHANNEL_ID="1193462697321300008" \
-e USER_ID="1080049025551712267" \
-e USER_AUTHORIZATION="MTA4MDA0OTAyNTU1MTcxMjI2Nw.GnP5MC.fon-G-4PRr8KU0zlEjb6By4UU_-c8iBoxkimHs" \
-e TZ=Asia/Shanghai \
deanxv/coze-discord-proxy

docker-compose.yml :

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
version: '3.4'
services:
  coze:
    image: deanxv/coze-discord-proxy:latest
    container_name: coze
    restart: always
    ports:
      - "7077:7077"
    volumes:
      - ./data:/app/coze-discord-proxy/data
    environment:
      - USER_ID=1080049025551712267 # 必须修改为我们 discord 用户的 ID
      - USER_AUTHORIZATION=MTA4MDA0OTAyNTU1MTcxMjI2Nw.GnP5MC.fon-G-4PRr8KU0zlEjb6By4UU_-c8iBoxkimHs # 必须修改为我们 discord 用户的授权密钥
      - BOT_TOKEN=MTIwOTM0MTM1NjM2Mzg4MjUwNg.Gv7teI.0D7hXoAND8B9x0VZR_yg3vdkncClRweXYNzdkw # 必须修改为监听消息的 Bot-Token
      - GUILD_ID=1193462695295467550 # 必须修改为两个机器人所在的服务器 ID
      - COZE_BOT_ID=1209341026255376415 # 必须修改为由 coze 托管的机器人 ID
      - CHANNEL_ID=1193462697321300011 # 默认频道-(目前版本下该参数仅用来活跃机器人)
      - PROXY_SECRET=coze123456 # [可选]接口密钥-修改此行为请求头校验的值(多个请以,分隔)
      - PROXY_URL=http://121.196.245.95:7890
      - TZ=Asia/Shanghai

部署完成后必须立即在服务器开启 clash-linux 代理 proxy_on,使用 rule 代理规则,然后开启防火墙和安全组 7077端口。


Windows

环境变量

  1. cmd:
1
2
echo %Path%  #输出 Path 环境变量中的所有路径
set Path     #查看以 Path 开头的环境变量(cmd 中修改环境变量后需重新打开窗口才会生效)
  1. powershell:
1
2
$env:Path  #输出 Path 环境变量(也可写作 $Env:PATH)
[System.Environment]::SetEnvironmentVariable("PATH",$env:path+";C:\veryhappy",[System.EnvironmentVariableTarget]::User) #设置用户级环境变量,系统级则将 User 改为 Machine

查看进程

查询 pid 对应的进程所运行的程序。

1
tasklist | findstr <PID号>

关闭进程

1
2
taskkill /f /t /im <进程名.exe>   #例如 nginx.exe 或 java.exe
taskkill /f /pid <查询的PID号>

image.png

查看端口

1
2
3
netstat -ano #查看所有端口
netstat -ano | findstr <端口号>
netstat -ano | find "关键字"

[!warning] 端口占用
问题:端口并没有被占用却提示端口被占用
方法:重启NAT网络就可以解决
net stop winnat 接着 net start winnat

重启NAT网络

1
2
net stop winnat
net start winnat

[!NOTE]
如果本地端口没有被占用,检查是否是与hyper-v保留端口冲突了
查看hyper-v启动后的保留端口范围
netsh interface ipv4 show excludedportrange protocol=tcp

  • 以管理员身份运行 powershell
  • 停止Windows NAT 驱动程序
    net stop winnat
  • 使用以下命令永久排除6379作为保留端口
    netsh int ipv4 add excludedportrange protocol=tcp startport=6379 numberofports=1 store=persistent
    提示:关键在于store=persistent参数表示持久化信息
    上面的命令可以通过修改numberofports参数保留startport开始的多个端口
  • 开启Windows NAT 驱动程序
    net start winnat

生成依赖

当使用 python 开发项目时使用以下命令生成项目依赖文件:

1
2
pip freeze > requirements.txt #含有python环境的所有依赖
pipreqs . --encoding=utf8 --force #仅包含项目用到的依赖 ,可能会缺少部份依赖需手动补充

Linux

安全自查

查询系统存在的账户使用情况

1
cat /etc/passwd
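
如果只想快速筛出具有 root 权限(UID 为 0)的账户,可以用下面的 awk 示例:

1
awk -F: '$3==0 {print $1}' /etc/passwd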

查看iptables防火墙规则,列出iptables当前配置的所有规则

1
iptables -L -n

切换用户

1
sudo su 用户名 #切换身份到目标用户(如 root)

磁盘空间

df命令在Linux系统中用于显示文件系统的磁盘空间使用情况。-h 是df命令的一个选项,它代表“human-readable”,意味着输出将以易于阅读的格式显示,例如以KB、MB、GB等单位,而不是默认的块(block)大小。

使用df -h时,你会得到以下类型的信息:

  • 文件系统(Filesystem):磁盘或分区的名称。
  • 容量(Size):文件系统的总大小。
  • 已用(Used):已使用的磁盘空间。
  • 可用(Available):当前可用的磁盘空间。
  • 已用百分比(Use%):磁盘空间使用的百分比。
  • 挂载点(Mounted on):文件系统挂载的目录。
1
df -h

例如,执行df -h命令后,输出可能如下所示:

1
2
3
Filesystem      Size  Used Avail Use% Mounted on 
/dev/sda1 100G 20G 80G 20% /
tmpfs 3.9G 0 3.9G 0% /dev/shm

在这个例子中,/dev/sda1是一个磁盘分区,总大小为100GB,已使用20GB,剩余80GB,使用了20%的空间,并且挂载在根目录/上。tmpfs是一个临时文件系统,通常用于存储临时文件,大小为3.9GB,尚未使用任何空间。

内核信息

查看 Linux 系统内核的主要命令和方法有:

  1. uname 命令
1
uname -r

显示当前正在使用的内核版本号。

  1. 查看/proc/version 文件
1
cat /proc/version

文件中包含内核名称、版本、编译日期等详细信息。

  1. rpm 命令(对于 RPM 包系统)
1
rpm -q kernel-headers kernel-devel

显示内核头文件和开发包的版本。

  1. dpkg 命令(对于 Debian/Ubuntu 系统)
1
dpkg -l | grep linux-image

显示所有已安装的内核包。

  1. 查看 grub 启动项

对于 grub 启动引导,可以查看 /boot/grub/grub.cfg 文件,找到默认引导的内核版本。

  1. 系统属性

如系统设置->关于本机中也会显示内核版本。

  1. sysctl 命令
1
sysctl kernel.version

直接显示内核版本号。

  1. 主机名和操作系统文件
1
2
cat /etc/hostname
cat /etc/*release

查看端口

1
2
3
netstat -tuln | grep :<端口号>
netstat -nplt #查看所有端口
netstat -tuln #查看所有端口

netstat 参数含义:
-a (all)显示所有选项,默认不显示 LISTEN 相关
-t (tcp)仅显示 tcp 相关选项
-u (udp)仅显示 udp 相关选项
-n 拒绝显示别名,能显示数字的全部转化成数字。
-l 仅列出有在 Listen (监听) 的服務状态
-p 显示建立相关链接的程序名
-r 显示路由信息,路由表
-e 显示扩展信息,例如 uid 等
-s 按各个协议进行统计
-c 每隔一个固定时间,执行该 netstat 命令。

查看程序

1
2
ps -ef | grep java #查询java进程的程序
ps -ef #查看正在运行的进程

安装服务

yum 工具中常见的几个主要参数如下:

  • -y / --assumeyes: 执行命令时对所有询问自动回答 Yes。
  • -q: 静默模式,尽量不输出过程信息。
  • -C: 仅使用本地缓存运行,不从网络下载元数据。
  • --downloadonly: 只下载软件包而不安装,可配合 --downloaddir 指定保存目录,便于离线安装。
  • -v: 执行时产生详细输出信息。
  • update: 更新已安装的软件包到最新版本。
  • upgrade: 升级系统,更新所有软件包到最新版本,并处理被废弃的旧包。
  • check-update:检查是否有更新。
  • list: 显示已安装和可更新的软件包列表。
  • search: 搜索软件包。
  • install: 安装软件包。
  • remove / erase: 卸载软件包(在 yum 中二者等价),同时会处理依赖它的软件包;软件的配置文件和数据文件一般不会被删除。

防火墙

Centos 7

  1. 防火墙的开启、关闭、禁用命令

(1)设置开机启用防火墙:systemctl enable firewalld.service
(2)设置开机禁用防火墙:systemctl disable firewalld.service
(3)启动防火墙:systemctl start firewalld
(4)关闭防火墙:systemctl stop firewalld
(5)重启防火墙:systemctl restart firewalld(重启后新添加的端口配置才生效)
(6)检查防火墙状态:systemctl status firewalld

  1. 使用 firewall-cmd 配置端口

(1)查看防火墙状态:firewall-cmd --state
(2)重新加载配置:firewall-cmd --reload
(3)查看开放的端口:firewall-cmd --list-ports
(4)开启防火墙端口:firewall-cmd --zone=public --add-port=9200/tcp --permanent

文件处理

查找文件

查找含有指定文件名的目录

1
find / -name 文件名

查看目录

显示指定目录的详细信息

1
ls -l [目录或者文件]

显示指定目录的隐藏文件

1
ls -al [目录或者文件]

查找文件

1
find -name .bashrc

查看文件大小

显示更加人性化

1
du -h file.txt

使用 ls 命令结合选项 -l 或者 -s/--size,可以查看文件的详细信息,包括文件大小。

1
ls -l file.txt

删除文件

强制删除不提示

1
rm -f 文件名

删除整个文件夹并不提示

1
rm -rf 目录名

复制迁移文件

复制单个文件到文件夹中

1
cp 文件 文件夹

复制整个文件夹到另一个文件夹中

1
cp -r 源文件夹 目标文件夹

重命名文件或文件夹

1
mv 源文件名 新文件名

移动文件或文件夹

1
mv 源文件 目标文件

查看文件

查看文件详细信息

1
stat 文件名

查看文件

1
more/cat 文件名

查看文件,要求显示行号并分页

1
cat -n 文件名 | more

默认查看文件后 10 行

1
tail 文件名

查看文件末尾 5 行的内容

1
tail -n 5 文件名

实时追踪文档的更新

1
tail -f 文件名

输出你好

1
echo 你好

在文件末尾添加内容

1
echo 内容 >> 文件名

覆盖文件的所有内容

1
echo 新内容 > 文件名

把文件内容写入到另一个文件的末尾

1
cat 文件名 1 >> 文件名 2

在当前目录为某个文件建立硬链接(注意:硬链接不能指向目录)

1
ln 源文件 自定义链接名

在当前目录建立一个指向/tmp 目录的软链接

1
ln -s /tmp 自定义链接名

查看/tmp 目录是否存在名为 log.txt 文件

1
find /tmp -name log.txt

查看/tmp 目录是否存在小于 2M 的文件

1
find /tmp -size -2M

查看整个系统是否存在大于 100M 的文件

1
find / -size +100M

查看文件中,在第几行有 yes

1
cat 文件名 | grep -n yes

查看文件中,在第几行有 yes 并忽略大小写

1
cat 文件名 | grep -ni yes

展示树形结构

tree 命令可以用来以树形结构显示目录的内容。它的主要用法如下:

  1. 显示当前目录下的文件和目录结构:
1
tree
  1. 显示指定目录下的文件和目录结构:
1
tree /path/to/directory
  1. 只显示指定深度内的结构:
1
tree -L 2  # 只显示两层深度的目录结构
  1. 不递归显示,只显示指定目录的直接内容:
1
tree -L 1  # 只显示当前目录的直接内容,不深入子目录
  1. 隐藏空目录:
1
tree --prune  # 不显示空目录
  1. 输出为颜色图示:
1
tree -C
  1. 指定后缀只显示匹配文件的结构:
1
tree -P '*.txt'
  1. 将结构输出到指定文件:
1
tree > directory.txt
  1. 以易读格式显示文件大小:
1
tree -h

解压文件

把文件压缩为.gz 尾缀的压缩文件

1
gzip 文件名  #生成 文件名.gz,原文件会被替换

解压 gz 尾缀的文件

1
gunzip 文件.gz

把/tmp 文件夹的所有文件压缩成.zip 尾缀的压缩文件

1
zip -r 文件.zip /tmp

解压 zip 尾缀的文件,解压到/root 目录下

1
unzip -d  /root 文件.zip

把/tmp 文件夹的所有文件压缩为.tar.gz 尾缀的压缩文件

1
tar -zcvf 文件.tar.gz /tmp

解压.tar.gz 尾缀的文件,解压到/root 目录下

1
tar -zxvf 文件.tar.gz -C  /root

进程管理

查看某进程的资源占用情况

1
top -p 进程号

找出占用 CPU 最多的进程

1
top,然后按大写的 P

找出占用内存最多的进程

1
top,然后按大写的 M

查看正在运行的进程

1
ps -ef

查看有关 mysql 的进程信息

1
ps -ef | grep mysql

正常杀死进程

1
2
kill -15 PID 号
kill PID号 #不带信号参数时默认发送 SIGTERM(15),同样属于正常终止

强制杀死进程

1
kill -9 pid #强制关闭

找出 CPU 占用高的进程 ID

1
top -c

根据进程 ID,找出 CPU 占用高的线程 ID

1
ps H -eo pid,tid,%cpu | grep 进程 ID

输出‘指定进程 ID 的 16 进制’

1
printf "%x\n" 进程 ID

根据‘指定进程 ID 的 16 进制’,查看到底是哪里 java 的代码问题

1
jstack 进程 ID | grep 指定进程 ID 的 16 进制 -A20
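
把上面几步串起来,一次完整的排查流程大致如下(进程号 12345 与线程号 12377 均为示意值):

1
2
3
4
5
6
7
8
# 1. 找到 CPU 占用高的 java 进程,假设 PID 为 12345
top -c
# 2. 查看该进程内各线程的 CPU 占用,假设最高的线程 TID 为 12377
ps H -eo pid,tid,%cpu | grep 12345
# 3. 将线程号转为 16 进制(12377 对应 3059)
printf "%x\n" 12377
# 4. 用 jstack 查看该线程的堆栈,定位具体代码位置
jstack 12345 | grep 3059 -A20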

README

简单模板

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
# Title
<h1 align="center">Welcome to 👋</h1>
<p>
<img alt="Version" src="https://img.shields.io/badge/version-1.0.0-blue.svg?cacheSeconds=2592000" />
<img src="https://img.shields.io/badge/node-%3E%3D16.13.0-blue.svg" />
<a href="https://github.com/Alleyf/big-event#readme" target="_blank">
<img alt="Documentation" src="https://img.shields.io/badge/documentation-yes-brightgreen.svg" />
</a>
<a href="https://github.com/Alleyf/big-event/graphs/commit-activity" target="_blank">
<img alt="Maintenance" src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" />
</a>
<a href="https://github.com/Alleyf/big-event/blob/master/LICENSE" target="_blank">
<img alt="License: MIT" src="https://img.shields.io/github/license/Alleyf/big-event" />
</a>
</p>
## Introduction
>
## Install
## Dependency
* springboot3
* vue3
* vite
* oss
* vue-router
* axios
* element-plus
* tinymce
* quill
* pinia
## Usage
## Demo
## Contributing
Contributions, issues and feature requests are welcome!<br />Feel free to
check [issues page](https://github.com/Alleyf/big-event/issues). You can also take a look at
the [contributing guide](https://github.com/Alleyf/big-event/blob/master/CONTRIBUTING.md).
## Show your support
Give a ⭐️ if this project helped you!
## License
Copyright © 2024 [Alleyf](https://github.com/Alleyf).<br />
This project is [MIT](https://github.com/Alleyf/big-event/blob/master/LICENSE) licensed.
## Acknowledgment
## Author
👤 **Alleyf**
* Website: https://alleyf.github.io/
* Github: [@Alleyf](https://github.com/Alleyf)
### 🏠 [Homepage](https://github.com/Alleyf/big-event#readme)
## Star History
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Alleyf/big-event&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Alleyf/big-event&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Alleyf/big-event&type=Date" />
</picture>

完整模板

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
<a id="readme-top"></a>
<!-- PROJECT SHIELDS -->
[![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]
[![LinkedIn][linkedin-shield]][linkedin-url]
<!-- PROJECT LOGO -->
<br />
<div align="center">
<a href="https://github.com/github_username/repo_name">
<img src="images/logo.png" alt="Logo" width="80" height="80">
</a>
<h3 align="center">project_title</h3>
<p align="center">
project_description
<br />
<a href="https://github.com/github_username/repo_name"><strong>Explore the docs »</strong></a>
<br />
<br />
<a href="https://github.com/github_username/repo_name">View Demo</a>
·
<a href="https://github.com/github_username/repo_name/issues">Report Bug</a>
·
<a href="https://github.com/github_username/repo_name/issues">Request Feature</a>
</p>
</div>
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li><a href="#usage">Usage</a></li>
<li><a href="#roadmap">Roadmap</a></li>
<li><a href="#contributing">Contributing</a></li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgments">Acknowledgments</a></li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About The Project
[![Product Name Screen Shot][product-screenshot]](https://example.com)
<p align="right">(<a href="#readme-top">back to top</a>)</p>
### Built With
* [![Vue][Vue.js]][Vue-url]
* [![Bootstrap][Bootstrap.com]][Bootstrap-url]
* [![JQuery][JQuery.com]][JQuery-url]
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- GETTING STARTED -->
## Getting Started
This is an example of how you may give instructions on setting up your project locally.
To get a local copy up and running follow these simple example steps.
### Prerequisites
This is an example of how to list things you need to use the software and how to install them.
npm
`npm install npm@latest -g`
### Installation
1. Get a free API Key at [https://example.com](https://example.com)
2. Clone the repo
`git clone https://github.com/github_username/repo_name.git`
1. Install NPM packages
`npm install`
1. Enter your API in `config.js`
`const API_KEY = 'ENTER YOUR API';`
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- USAGE EXAMPLES -->
## Usage
_For more examples, please refer to the [Documentation](https://example.com) _
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ROADMAP -->
## Roadmap
- [ ] Feature 1
- [ ] Feature 2
- [ ] Feature 3
- [ ] Nested Feature
See the [open issues](https://github.com/github_username/repo_name/issues) for a full list of proposed features (and known issues).
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTRIBUTING -->
## Contributing
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- LICENSE -->
## License
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTACT -->
## Contact
- [Alleyf@Blog](https://alleyf.github.io)
- [Alleyf@Email](alleyf@qq.com)
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ACKNOWLEDGMENTS -->
## Acknowledgments
* []()
* []()
* []()
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- MARKDOWN LINKS & IMAGES -->
[Vue.js]: https://img.shields.io/badge/Vue.js-35495E?style=for-the-badge&logo=vuedotjs&logoColor=4FC08D
[Vue-url]: https://vuejs.org/

CLI 生成器

通过 GitHub - kefranabg/readme-md-generator: 📄 CLI that generates beautiful README.md files 在命令行输入 npx指令生成 readme 文件。

生成前可以先配置package.json文件包含项目相关信息,生成器会读取文件中的信息进行生成。

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
// The package.json is not required to run README-MD-GENERATOR
{
"name": "readme-md-generator",
"version": "1.0.0",
"description": "CLI that generates beautiful README.md files.",
"author": "Alleyf",
"license": "MIT",
"homepage": " https://github.com/kefranabg/readme-md-generator#readme" ,
"repository": {
"type": "git",
"url": "git+ https://github.com/kefranabg/readme-md-generator.git"
},
"bugs": {
"url": " https://github.com/kefranabg/readme-md-generator/issues"
},
"engines": {
"npm": ">=5.5.0",
"node": ">=9.3.0"
}
}

命令行输入以下指令生成:

1
npx readme-md-generator -y

|500

SpringCloud-Alibaba 版本

由于 Spring Boot 3.0,Spring Boot 2.7~2.4 和 2.4 以下版本之间变化较大,目前企业级客户老项目相关 Spring Boot 版本仍停留在 Spring Boot 2.4 以下,为了同时满足存量用户和新用户不同需求,社区以 Spring Boot 3.0 和 2.4 分别为分界线,同时维护 2022.x、2021.x、2.2.x 三个分支迭代。如果不想跨分支升级,如需使用新特性,请升级为对应分支的新版本。 为了规避相关构建过程中的依赖冲突问题,我们建议可以通过 云原生应用脚手架 进行项目创建。

2022.x 分支

适配 Spring Boot 3.0,Spring Cloud 2022.x 版本及以上的 Spring Cloud Alibaba 版本按从新到旧排列如下表(最新版本用*标记): (注意,该分支 Spring Cloud Alibaba 版本命名方式进行了调整,未来将对应 Spring Cloud 版本,前三位为 Spring Cloud 版本,最后一位为扩展版本,比如适配 Spring Cloud 2022.0.0 版本对应的 Spring Cloud Alibaba 第一个版本为:2022.0.0.0,第二个版本为:2022.0.0.1,依此类推)

Spring Cloud Alibaba Version Spring Cloud Version Spring Boot Version
2022.0.0.0* Spring Cloud 2022.0.0 3.0.2
2022.0.0.0-RC2 Spring Cloud 2022.0.0 3.0.2
2022.0.0.0-RC1 Spring Cloud 2022.0.0 3.0.0

2021.x 分支

适配 Spring Boot 2.4,Spring Cloud 2021.x 版本及以上的 Spring Cloud Alibaba 版本按从新到旧排列如下表(最新版本用*标记):

Spring Cloud Alibaba Version Spring Cloud Version Spring Boot Version
2021.0.5.0* Spring Cloud 2021.0.5 2.6.13
2021.0.4.0 Spring Cloud 2021.0.4 2.6.11
2021.0.1.0 Spring Cloud 2021.0.1 2.6.3
2021.1 Spring Cloud 2020.0.1 2.4.2

2.2.x 分支

适配 Spring Boot 为 2.4,Spring Cloud Hoxton 版本及以下的 Spring Cloud Alibaba 版本按从新到旧排列如下表(最新版本用*标记):

Spring Cloud Alibaba Version Spring Cloud Version Spring Boot Version
2.2.10-RC1* Spring Cloud Hoxton.SR12 2.3.12.RELEASE
2.2.9.RELEASE Spring Cloud Hoxton.SR12 2.3.12.RELEASE
2.2.8.RELEASE Spring Cloud Hoxton.SR12 2.3.12.RELEASE
2.2.7.RELEASE Spring Cloud Hoxton.SR12 2.3.12.RELEASE
2.2.6.RELEASE Spring Cloud Hoxton.SR9 2.3.2.RELEASE
2.2.1.RELEASE Spring Cloud Hoxton.SR3 2.2.5.RELEASE
2.2.0.RELEASE Spring Cloud Hoxton.RELEASE 2.2.X.RELEASE
2.1.4.RELEASE Spring Cloud Greenwich.SR6 2.1.13.RELEASE
2.1.2.RELEASE Spring Cloud Greenwich 2.1.X.RELEASE
2.0.4.RELEASE(停止维护,建议升级) Spring Cloud Finchley 2.0.X.RELEASE
1.5.1.RELEASE(停止维护,建议升级) Spring Cloud Edgware 1.5.X.RELEASE

组件版本关系

每个 Spring Cloud Alibaba 版本及其自身所适配的各组件对应版本如下表所示(注意,Spring Cloud Dubbo 从 2021.0.1.0 起已被移除出主干,不再随主干演进):

Spring Cloud Alibaba Version Sentinel Version Nacos Version RocketMQ Version Dubbo Version Seata Version
2022.0.0.0 1.8.6 2.2.1 4.9.4 ~ 1.7.0
2022.0.0.0-RC2 1.8.6 2.2.1 4.9.4 ~ 1.7.0-native-rc2
2021.0.5.0 1.8.6 2.2.0 4.9.4 ~ 1.6.1
2.2.10-RC1 1.8.6 2.2.0 4.9.4 ~ 1.6.1
2022.0.0.0-RC1 1.8.6 2.2.1-RC 4.9.4 ~ 1.6.1
2.2.9.RELEASE 1.8.5 2.1.0 4.9.4 ~ 1.5.2
2021.0.4.0 1.8.5 2.0.4 4.9.4 ~ 1.5.2
2.2.8.RELEASE 1.8.4 2.1.0 4.9.3 ~ 1.5.1
2021.0.1.0 1.8.3 1.4.2 4.9.2 ~ 1.4.2
2.2.7.RELEASE 1.8.1 2.0.3 4.6.1 2.7.13 1.3.0
2.2.6.RELEASE 1.8.1 1.4.2 4.4.0 2.7.8 1.3.0
2021.1 or 2.2.5.RELEASE or 2.1.4.RELEASE or 2.0.4.RELEASE 1.8.0 1.4.1 4.4.0 2.7.8 1.3.0
2.2.3.RELEASE or 2.1.3.RELEASE or 2.0.3.RELEASE 1.8.0 1.3.3 4.4.0 2.7.8 1.3.0
2.2.1.RELEASE or 2.1.2.RELEASE or 2.0.2.RELEASE 1.7.1 1.2.1 4.4.0 2.7.6 1.2.0
2.2.0.RELEASE 1.7.1 1.1.4 4.4.0 2.7.4.1 1.0.0
2.1.1.RELEASE or 2.0.1.RELEASE or 1.5.1.RELEASE 1.7.0 1.1.4 4.4.0 2.7.3 0.9.0
2.1.0.RELEASE or 2.0.0.RELEASE or 1.5.0.RELEASE 1.6.3 1.1.1 4.4.0 2.7.3 0.7.1

IDEA

常用快捷键

  • 查询快捷键
    CTRL+N 查找类
    CTRL+SHIFT+N 查找文件
    CTRL+SHIFT+ALT+N 查找类中的方法或变量
    CTRL+B 找变量的来源
    CTRL+ALT+B 找所有的子类
    CTRL+SHIFT+B 找变量的类
    CTRL+G 定位行
    CTRL+F 在当前窗口查找文本
    CTRL+SHIFT+F 在指定窗口查找文本
    CTRL+R 在当前窗口替换文本
    CTRL+SHIFT+R 在指定窗口替换文本
    ALT+SHIFT+C 查找修改的文件
    CTRL+E 最近打开的文件
    F3 向下查找关键字出现位置
    SHIFT+F3 向上一个关键字出现位置
    F4 查找变量来源
    CTRL+ALT+F7 选中的字符,查找其在工程中出现的地方
    CTRL+SHIFT+O 弹出显示查找内容
  • 自动代码
    ALT+回车 导入包,自动修正
    CTRL+ALT+L 格式化代码
    CTRL+ALT+I 自动缩进
    CTRL+ALT+O 优化导入的类和包
    ALT+INSERT 生成代码(如 GET,SET 方法,构造函数等)
    CTRL+E 或者 ALT+SHIFT+C 最近更改的代码
    CTRL+SHIFT+SPACE 自动补全代码
    CTRL+空格 代码提示
    CTRL+ALT+SPACE 类名或接口名提示
    CTRL+P 方法参数提示
    CTRL+J 自动代码
    CTRL+ALT+T 把选中的代码放在 TRY{} IF{} ELSE{} 里
  • 复制快捷方式
    F5 拷贝文件快捷方式
    CTRL+D 复制行
    CTRL+X 剪切,删除行
    CTRL+SHIFT+V 可以复制多个文本
  • 高亮
    CTRL+F 选中的文字,高亮显示 上下跳到下一个或者上一个
    F2 或 SHIFT+F2 高亮错误或警告快速定位
    CTRL+SHIFT+F7 高亮显示多个关键字.
  • 其他快捷方式
    CTRL+SHIFT+U 大小写切换
    CTRL+Z 倒退
    CTRL+SHIFT+Z 向前
    CTRL+ALT+F12 资源管理器打开文件夹
    ALT+F1 查找文件所在目录位置
    SHIFT+ALT+INSERT 竖编辑模式
    CTRL+/ 注释//
    CTRL+SHIFT+/ 注释 /*…*/
    CTRL+W 选中代码,连续按会有其他效果
    CTRL+B 快速打开光标处的类或方法
    ALT+ ←/→ 切换代码视图
    CTRL+ALT ←/→ 返回上次编辑的位置
    ALT+ ↑/↓ 在方法间快速移动定位
    SHIFT+F6 重构-重命名
    CTRL+H 显示类结构图
    CTRL+Q 显示注释文档
    ALT+1 快速打开或隐藏工程面板
    CTRL+SHIFT+UP/DOWN 代码向上/下移动。
    CTRL+UP/DOWN 光标跳转到第一行或最后一行下
    ESC 光标返回编辑框
    SHIFT+ESC 光标返回编辑框,关闭无用的窗口

出国留学

三毛
飞鸟
911Cloud
一云

参考

  1. centos7安装Docker详细步骤(无坑版教程)-腾讯云开发者社区-腾讯云
  2. 使用Docker搭建MySQL主从复制(一主一从)_使用docker搭建mysql主从复制(一主一从)-CSDN博客
  3. 前端研发需要知道的Docker-腾讯云开发者社区-腾讯云
  4. Docker启动安装nacos(详情讲解,全网最细)_docker启动nacos-CSDN博客
  5. Docker部署RocketMQ4.x-腾讯云开发者社区-腾讯云
  6. 2023最新青龙面板京东脚本库(11月3日,持续更新中)-CSDN博客
  7. 02. 青龙面板应用——安装依赖拉取仓库运行京东脚本(保姆级图文)_青龙面板脚本库-CSDN博客
  8. Crawlab 快速开始
  9. IDEA常用快捷键集合(详解) - 知乎
  10. GitHub - kefranabg/readme-md-generator: 📄 CLI that generates beautiful README.md files

各种环境配置
https://alleyf.github.io/2023/11/59c10a08fce4.html
作者:fcs
发布于:2023年11月18日
更新于:2024年11月10日
许可协议