#####################################

Replica set administration commands:

rs.add()    Adds a member to the replica set.

rs.addArb()    Adds an arbiter to the replica set.

rs.conf()    Returns the replica set configuration document.

rs.freeze()    Prevents the current member from seeking election as primary for a period of time.

rs.help()    Returns help for the replica set commands.

rs.initiate()    Initializes a new replica set.

rs.printReplicationInfo()    Prints a report of the replication status from the primary's point of view.

rs.printSlaveReplicationInfo()    Prints a report of the replication status from the secondaries' point of view.

rs.reconfig()    Updates the replica set configuration by re-applying a configuration document.

rs.remove()    Removes a member from the replica set.

rs.slaveOk()    Sets slaveOk for the current connection. Deprecated; use readPref() and Mongo.setReadPref() to set the read preference instead (see the sketch after this list).

rs.status()    Returns the replica set status document.

rs.stepDown()    Causes the current primary to step down to a secondary and triggers an election.

rs.syncFrom()    Sets the member from which this replica set member syncs, overriding the default sync-source selection logic.
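
Since rs.slaveOk() is deprecated, here is a minimal sketch of reading from secondaries via read preference instead (the collection name is just a placeholder):

db.getMongo().setReadPref("secondaryPreferred")   // set the read preference for the current shell connection
db.products.find().readPref("secondary")          // or set it per query on the cursor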

 

 

Step 1:

Keyfile requirements for the replica set member instances: every instance's keyfile must have exactly the same content, its permissions must be 0600, and both its owner and group must be the work user. Install MongoDB and place the identical keyfile on every member.

For example, with a keyfile named test:

[work@xxx etc]$ ll
total 8
-rw-rw-r-- 1 work work 1228 Nov 27 11:23 mongodb.conf
-rw------- 1 work work 1004 Nov 27 10:14 test

 

Step 2:

Do not enable authorization on the members at first, i.e. comment out the security settings, delete everything under the data directory (dbPath), and then start all member instances. In fact, disabling the auth check only needs to be done on the primary: the secondaries do not need their auth check removed; just delete their data directories and make sure the keyfile is identical on all members.

Step 3:

Run rs.initiate() on exactly one instance (never on more than one), and then, on that same instance, run rs.add(hostportstr) to add the other members. This initializes the replica set and adds the members, as sketched below:
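
A minimal sketch of this step from the mongo shell (hostnames and ports are placeholders):

rs.initiate()                      // run on ONE member only
rs.add("secondary-host:28008")     // add each remaining data-bearing member
rs.addArb("arbiter-host:28008")    // optionally add an arbiter
rs.status()                        // verify that all members show up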

Step 4:

Then create a superuser under the admin database on the primary: create the user:

use admin
db.createUser({user:'mongo_dba',pwd:'123456',roles:['root']})

 

Step 5:

Enable the authorization settings in the config file, shut the instance down, and start it again: enable auth and restart each instance.
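
A sketch of one way to do the shutdown from the mongo shell (the transcript further down uses kill <pid> instead; following the keyfile notes below, stop the secondaries first and the primary last):

use admin
db.shutdownServer()   // cleanly stops the mongod you are connected to
// then enable the security block in mongodb.conf and start the process again, e.g.:
// /home/work/mongodb/3.6/bin/mongod --config /home/work/mongodb/mongo_28008/etc/mongodb.conf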

 

 

###################################################################################################################

01 Never run rs.initiate() again on a newly added node; in a cluster, only one instance may ever run this method, otherwise the replica set cannot be built;

 

Just make sure replication is not initialized on more than one server.

 

When initializing the replica set:

1. Disable the security settings, as follows:

 

systemLog:
  destination: file
  path: /home/work/mongodb/mongo_28008/log/mongodb.log
  logAppend: true

#net Options
net:
  maxIncomingConnections: 10240
  port: 28008
  bindIp: 10.10.10.10,localhost
  serviceExecutor : adaptive

#security Options
#security:
#  authorization: 'enabled'
#  keyFile: /home/work/mongodb/mongo_28008/etc/test
#  clusterAuthMode: "keyFile"

#storage Options
storage:
  engine: "wiredTiger"
  directoryPerDB: true
  dbPath: /home/work/mongodb/mongo_28008/data
  indexBuildRetry: true
  journal:
    enabled: true
    commitIntervalMs: 100
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
      cacheSizeGB: 60
      journalCompressor: "snappy"
    collectionConfig:
      blockCompressor: "snappy"
    indexConfig:
      prefixCompression: true
    #wiredTigerCollectionConfigString: lsm
    #wiredTigerIndexConfigString: lsm

#replication Options
replication:
  oplogSizeMB: 2048 #2GB
  replSetName: test

#operationProfiling Options
operationProfiling:
  slowOpThresholdMs: 100
  mode: "slowOp"

processManagement:
  fork: true
  pidFilePath: /home/work/mongodb/mongo_28008/tmp/mongo_28008.pid

 

2. The administrator account must be created under the admin database, otherwise it will not succeed, as follows:

 

[work@hostname mongo_28008]$ /home/work/mongodb/3.6/bin/mongo  --port 28008
Percona Server for MongoDB shell version v3.6.17-4.0
connecting to: mongodb://127.0.0.1:28008/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("03494baa-ad28-4ebe-89ea-ca3c6d523f90") }
Percona Server for MongoDB server version: v3.6.17-4.0
Welcome to the Percona Server for MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        https://www.percona.com/doc/percona-server-for-mongodb
Questions? Try the support group
        https://www.percona.com/forums/questions-discussions/percona-server-for-mongodb
Server has startup warnings: 
2020-10-28T15:33:16.353+0800 I STORAGE  [initandlisten] 
2020-10-28T15:33:16.353+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-10-28T15:33:16.353+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2020-10-28T15:33:16.353+0800 I STORAGE  [initandlisten] 
2020-10-28T15:33:16.353+0800 I STORAGE  [initandlisten] ** WARNING: The configured WiredTiger cache size is more than 80% of available RAM.
2020-10-28T15:33:16.353+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/faq-memory-diagnostics-wt
2020-10-28T15:33:17.154+0800 I CONTROL  [initandlisten] 
2020-10-28T15:33:17.154+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-10-28T15:33:17.154+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-10-28T15:33:17.154+0800 I CONTROL  [initandlisten] **          You can use percona-server-mongodb-enable-auth.sh to fix it.
2020-10-28T15:33:17.154+0800 I CONTROL  [initandlisten] 
> rs.initiate()
{
        "info2" : "no configuration specified. Using a default configuration for the set",
        "me" : "10.10.10.10:28008",
        "ok" : 1,
        "operationTime" : Timestamp(1603870486, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1603870486, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
test:SECONDARY> db.createUser({user:'mongo_dba',pwd:'123456',roles:['root']})
2020-10-28T15:34:55.470+0800 E QUERY    [thread1] Error: couldn't add user: No role named root@test :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createUser@src/mongo/shell/db.js:1437:15
@(shell):1:1
test:PRIMARY> db.createUser({user:'mongo_dba',pwd:'123456',roles:['root']})
2020-10-28T15:35:04.103+0800 E QUERY    [thread1] Error: couldn't add user: No role named root@test :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createUser@src/mongo/shell/db.js:1437:15
@(shell):1:1
test:PRIMARY> use admin
switched to db admin
test:PRIMARY> db.createUser({user:'mongo_dba',pwd:'123456',roles:['root']})
Successfully added user: { "user" : "mongo_dba", "roles" : [ "root" ] }
test:PRIMARY> 

 

 

3. Shut the instance down and enable the security settings, as follows:

 

[work@hostname mongo_28008]$ ps aux|grep mongod
work     19967  0.0  0.0 103244   848 pts/1    S+   15:40   0:00 grep mongod
work     31347  2.0  0.3 1627896 58348 ?       SLl  15:33   0:08 /home/work/mongodb/3.6/bin/mongod --config /home/work/mongodb/mongo_28008/etc/mongodb.conf
[work@hostname mongo_28008]$ kill 31347
[work@hostname mongo_28008]$ vim /home/work/mongodb/mongo_28008/etc/mongodb.conf
systemLog:
  destination: file
  path: /home/work/mongodb/mongo_28008/log/mongodb.log
  logAppend: true

#net Options
net:
  maxIncomingConnections: 10240
  port: 28008
  bindIp: 10.38.10.10,localhost
  serviceExecutor : adaptive

#security Options
security:
  authorization: 'enabled'
  keyFile: /home/work/mongodb/mongo_28008/etc/test
  clusterAuthMode: "keyFile"

#storage Options
storage:
  engine: "wiredTiger"
  directoryPerDB: true
  dbPath: /home/work/mongodb/mongo_28008/data
  indexBuildRetry: true
  journal:
    enabled: true
    commitIntervalMs: 100
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
      cacheSizeGB: 60
      journalCompressor: "snappy"
    collectionConfig:
      blockCompressor: "snappy"
    indexConfig:
      prefixCompression: true
    #wiredTigerCollectionConfigString: lsm
    #wiredTigerIndexConfigString: lsm

#replication Options
replication:
  oplogSizeMB: 2048 #2GB
  replSetName: test

#operationProfiling Options
operationProfiling:
  slowOpThresholdMs: 100
  mode: "slowOp"

processManagement:
  fork: true
  pidFilePath: /home/work/mongodb/mongo_28008/tmp/mongo_28008.pid



 

4. Start the service

[work@hostname mongo_28008]$ /home/work/mongodb/3.6/bin/mongod --config /home/work/mongodb/mongo_28008/etc/mongodb.conf
about to fork child process, waiting until server is ready for connections.
forked process: 20145
child process started successfully, parent exiting

 

 

######################

Configuration notes:

 

storage:
  engine: "wiredTiger"
  directoryPerDB: true
  # string. Default: /data/db on Linux and macOS, \data\db on Windows. Here we normally keep it under the
  # instance's deployment directory instead of the default. This is the directory where the mongod instance
  # stores its data; the storage.dbPath setting applies only to mongod.
  dbPath: /home/work/mongodb/mongo_28008/data
  indexBuildRetry: true
  journal:
    # boolean. Default: true on 64-bit systems, false on 32-bit systems. Enables or disables the durability
    # journal to keep the data files valid and recoverable. This option only applies when you specify
    # storage.dbPath. mongod enables journaling by default.
    enabled: true
    # number. Default: 100 or 30. New in version 3.2. The maximum time in milliseconds the mongod process
    # allows between journal operations. Values range from 1 to 500 ms. Lower values increase journal
    # durability at the expense of disk performance. The default journal commit interval is 100 ms.
    commitIntervalMs: 100
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
      # float. The maximum size of the internal cache WiredTiger uses for all data. Changed in version 3.4:
      # the value may range from 256MB to 10TB and may be a float; the default value also changed.
      cacheSizeGB: 60
      journalCompressor: "snappy"
    collectionConfig:
      blockCompressor: "snappy"
    indexConfig:
      prefixCompression: true
    #wiredTigerCollectionConfigString: lsm
    #wiredTigerIndexConfigString: lsm

 

Create a keyfile for the replica set so that cluster members authenticate to each other:

Create the keyFile:

Purpose of the keyFile: security authentication between cluster members. Enabling keyfile authentication implicitly turns on auth as well (to make sure I could still log in afterwards, I had already created a user).

  (1): openssl rand -base64 765 > /root/mongodb/keyfile

Here 765 is the number of random bytes openssl generates (765 base64-encoded bytes come to roughly 1020 characters, within MongoDB's 1024-character keyfile limit); /root/mongodb/keyfile is the path where the file is written.

  (2): the keyfile's permissions must be 0600 or 0400

   chmod 0600 /root/mongodb/keyfile

Note: before creating the keyFile, stop the mongod service on every primary and secondary member of the replica set (systemctl stop mongodb.service) and only then create it; otherwise the service may fail to start.

    MongoDB replica sets fail over automatically: if you stop the primary first, the primary role moves to another member. To avoid an unwanted primary change, stop the secondaries first and the primary last.

 (3): copy the keyfile from the primary node to every other secondary server, to the path referenced by the keyFile field in mongo.conf, and set its permissions to 0600 as well.
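
Once every member has been restarted with the keyFile, a quick sanity check from the mongo shell (user, password, and port taken from the earlier example) might look like this:

use admin
db.auth('mongo_dba', '123456')   // should return 1
rs.status()                      // all members should authenticate to each other and report healthy states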

 

 

Replica set member configuration:

glc:PRIMARY> rs.conf()
{
        "_id" : "glc",
        "version" : 2,
        "protocolVersion" : NumberLong(1),
        "writeConcernMajorityJournalDefault" : true,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "xxx:27076",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "yyy:27076",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "zzz:27076",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : true,
                        "priority" : 0,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(21600),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : -1,
                "catchUpTakeoverDelayMillis" : 30000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("5fd6e787187fc38ca7584ed4")
        }
}

 

I. Priority configuration:

When setting up a MongoDB replica set, the Primary, secondary, and arbiter nodes may have uneven resource configurations (CPU or memory), so some nodes must be prevented from becoming Primary.
What we know about MongoDB's settings:

  1.   Except for arbiters, every member has a priority; you can set it manually to decide which member has the greatest weight for becoming primary.
  2.   Within the replica set, the priority value determines eligibility; the range is 0 to 1000, and the higher the value, the higher the priority.
  3. The default value is 1; if the value is 0, the member can never become primary.
  4. If you set this directly at planning time, nothing more is needed, so that case is skipped here.
  5. Configuring a node that joins while the set is online:

Configuration procedure: this is done by changing the priority value. The default priority is 1 and the valid range is 0 to 1000; the larger the value, the more likely that member becomes primary.

Note: the index inside members[...] in step 2 has nothing to do with the member's _id; it is the positional order of the members as returned by rs.conf(). These operations must be performed on the Primary.

 

  •  By assigning different priorities we can make some members more likely to become primary, or make certain members ineligible altogether. The value of a member's priority setting determines its weight in elections; the higher the value, the higher the priority.
  • Change priorities during a maintenance window. Changing the priority demotes the primary and triggers an election; before the election, the primary closes all existing connections.
  • We change a member's priority by editing the entry at the corresponding position of the members array in the replica set configuration. Array indexes start at 0; do not confuse the array index with the member's _id.
  • MongoDB cannot set the current primary's priority to 0. To prevent the current primary from becoming primary again, first demote it with rs.stepDown() (see the sketch after this list).
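
A minimal sketch of the last point (assuming members[0] is the current primary):

rs.stepDown(60)                 // run on the primary: step down and do not seek re-election for 60 seconds
// reconnect to the new primary, then lower the old primary's priority:
cfg = rs.conf()
cfg.members[0].priority = 0
rs.reconfig(cfg)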

 

Example. Pay special attention: in members[index], the index always starts counting from 0 and has nothing to do with the _id of the elements in the members array. Do not mix them up!

 

cfg = rs.conf()

cfg.members[0].priority = 0

cfg.members[1].priority = 0.5

cfg.members[2].priority = 1

cfg.members[3].priority = 2

rs.reconfig(cfg)

 

Preventing a secondary from ever being promoted to primary:

 

cfg = rs.conf()

cfg.members[2].priority = 0

rs.reconfig(cfg)

 

Promoting a specific node to primary. Alternatively, you can first rs.remove() the instances that must not become primary and rs.add() them back after the switchover:

 

cfg = rs.conf()

cfg.members[0].priority = 1

cfg.members[1].priority = 1

cfg.members[2].priority = 5

rs.reconfig(cfg)

The operations above promote member 2 (members[2]) to primary; as soon as rs.reconfig(cfg) is executed, an election takes place immediately:


II. Configuring a hidden node:

 

cfg = rs.conf()

cfg.members[0].priority = 0
cfg.members[0].hidden = true
rs.reconfig(cfg)

 

Notes:

  1. The priority is set to the lowest value, 0, to prevent the member from being elected primary.
  2. The member is configured as a hidden node so that it is invisible to the application, i.e. no read or write traffic is routed to it.

In particular, when the application uses read/write splitting and you add a new mongod secondary by letting it sync data normally, remember that after running rs.add("hostname:port") on the primary you must configure the new instance as a hidden or delayed node; otherwise the application will route traffic to it and report errors until the data is fully synced, at which point you can change it back.

Here is the error the application reported when a newly added instance was not set as a hidden or delayed node:

2021-03-16 10:15:24,093|INFO |cluster-ClusterId{value='4023aa92897e9155d7e40602', description='null'}-xxx:28011|org.mongodb.driver.cluster|
Exception in monitor thread while connecting to server xxx:28011
com.mongodb.MongoCommandException: Command failed with error 211 (KeyNotFound): 'Cache Reader No keys found for HMAC that is valid for time: { ts: Timestamp(1615860924, 2) } with id: 6888689310090919952' on server xxx:28011.
The full response is {"ok": 0.0, "errmsg": "Cache Reader No keys found for HMAC that is valid for time: { ts: Timestamp(1615860924, 2) } with id: 6888689310090919952", "code": 211, "codeName": "KeyNotFound"}
        at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)
        at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:303)
        at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)
        at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
        at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:38)
        at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:180)
        at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:124)
        at java.lang.Thread.run(Thread.java:748)


III. Configuring a delayed node:

  1. The priority must be set to 0 to prevent the delayed node from being elected primary.
  2. The delayed node should also be made hidden, so that client reads against secondaries are never routed to it.
  3. The node can still vote in elections.
  4. A delayed node works by applying the oplog with a delay, so the delay must respect two constraints: it should be no shorter than the maintenance window, and it must be smaller than the oplog window, otherwise the node cannot keep up and goes stale (the sketch after this list shows how to check the oplog window).
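
A quick way to check the oplog window before picking the delay, as referenced in point 4 (run on the primary):

rs.printReplicationInfo()   // the reported "log length start to end" must comfortably exceed the chosen slaveDelay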

 

Example:

cfg = rs.conf()
cfg.members[0].priority = 0
cfg.members[0].hidden = true
cfg.members[0].slaveDelay = 3600
rs.reconfig(cfg)


If the MongoDB data directory is deleted, the instance fails to start and reports:

2021-02-02T12:04:43.736+0800 I STORAGE  [initandlisten] exception in initAndListen: NonExistentPath: Data directory /home/work/mongodb/mongo_28042/data not found., terminating
2021-02-02T12:04:43.736+0800 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
2021-02-02T12:04:43.736+0800 I NETWORK  [initandlisten] removing socket file: /tmp/mongodb-28042.sock
2021-02-02T12:04:43.736+0800 I CONTROL  [initandlisten] now exiting
2021-02-02T12:04:43.736+0800 I CONTROL  [initandlisten] shutting down with code:100

 

 

Adding a new member to a MongoDB replica set directly via initial sync:

 

On SSDs, roughly 8 GB of data can be copied per minute;
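
A sketch of adding such a member while keeping application traffic off it until the initial sync finishes (hostname and port are placeholders; run on the primary):

rs.add({ host: "new-host:28042", priority: 0, hidden: true })   // add the member as hidden so clients never read from it
rs.status().members                                             // watch it go from STARTUP2 to SECONDARY
// once it has caught up, turn it back into a normal secondary
// (this assumes rs.add() appended it as the last element of the members array):
cfg = rs.conf()
cfg.members[cfg.members.length - 1].priority = 1
cfg.members[cfg.members.length - 1].hidden = false
rs.reconfig(cfg)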

 

 

Changing the sync source of a specific replica set secondary:

First log in to the target secondary, then change its sync source:

glc:SECONDARY> rs.syncFrom("xxx:28042");
{
        "syncFromRequested" : "xxx:28042",
        "prevSyncTarget" : "yyy:28042",
        "ok" : 1,
        "operationTime" : Timestamp(1612342134, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1612342134, 1),
                "signature" : {
                        "hash" : BinData(0,"f7Bx8Jqa5QIfu0ZzOG9eBe3+KmU="),
                        "keyId" : NumberLong("6919692836459773955")
                }
        }
}
glc:SECONDARY> 


# Shows the sync source being changed from yyy:28042 to xxx:28042:

Check the sync source information:

glc:PRIMARY> rs.status().members
[
        {
                "_id" : 1,
                "name" : "xxx:28042",
                "health" : 1,
                "state" : 1,
                "stateStr" : "PRIMARY",
                "uptime" : 1219104,
                "optime" : {
                        "ts" : Timestamp(1612342939, 57),
                        "t" : NumberLong(8)
                },
                "optimeDate" : ISODate("2021-02-03T09:02:19Z"),
                "syncingTo" : "",
                "syncSourceHost" : "",
                "syncSourceId" : -1,
                "infoMessage" : "",
                "electionTime" : Timestamp(1612256361, 1),
                "electionDate" : ISODate("2021-02-02T08:59:21Z"),
                "configVersion" : 15,
                "self" : true,
                "lastHeartbeatMessage" : ""
        },
        {
                "_id" : 4,
                "name" : "xxx:28042",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "uptime" : 1999,
                "optime" : {
                        "ts" : Timestamp(1612342939, 57),
                        "t" : NumberLong(8)
                },
                "optimeDurable" : {
                        "ts" : Timestamp(1612342939, 57),
                        "t" : NumberLong(8)
                },
                "optimeDate" : ISODate("2021-02-03T09:02:19Z"),
                "optimeDurableDate" : ISODate("2021-02-03T09:02:19Z"),
                "lastHeartbeat" : ISODate("2021-02-03T09:02:28.433Z"),
                "lastHeartbeatRecv" : ISODate("2021-02-03T09:02:29.351Z"),
                "pingMs" : NumberLong(0),
                "lastHeartbeatMessage" : "",
                "syncingTo" : "xxx:28042",
                "syncSourceHost" : "xxx:28042",
                "syncSourceId" : 1,
                "infoMessage" : "",
                "configVersion" : 15
        }
]
glc:PRIMARY> 


# "syncingTo" : "xxx:28042",

# "syncSourceHost" : "xxx:28042",



 

 

Configuring the default write-concern policy on the server side:

cfg = rs.conf()

cfg.settings.getLastErrorDefaults = { w: "majority", wtimeout: 5000 }

rs.reconfig(cfg)

 

 

Configuring the write-concern policy on the client side:

The policy can also be specified by the client, as follows: the write must be acknowledged by at least two nodes, with a timeout of 5 s:

 
db.products.insert(
  { item: "envelopes", qty: 100, type: "Clasp" },
  { writeConcern: { w: 2, wtimeout: 5000 } }
)

 

 

###########################