puppet MCollective command-line usage

【Common puppet MCollective commands】

 #mco inventory hotpu

Inventory for hotpu:

Server Statistics:
Version: 2.8.4
Start Time: Mon May 30 18:53:09 +0800 2016
Config File: /etc/mcollective/server.cfg
Collectives: mcollective
Main Collective: mcollective
Process ID: 2260
Total Messages: 1
Messages Passed Filters: 1
Messages Filtered: 0
Expired Messages: 0
Replies Sent: 0
Total Processor Time: 91.79 seconds
System Time: 72.83 seconds

Agents:
discovery filemgr nrpe
package puppet rpcutil
service shell

………………

#mco inventory hotpu | awk '/Facts:/,/^$/'
Facts:
architecture => x86_64
augeasversion => 1.0.0
bios_release_date => 08/10/2007
bios_vendor => Dell Inc.
bios_version => 1.5.1

………………
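The range pattern /Facts:/,/^$/ makes awk print everything from the line matching "Facts:" up to the next blank line. A self-contained sketch of the same pattern against stand-in inventory output:

```shell
# Feed awk a small stand-in for "mco inventory" output and extract the
# Facts block with the same range pattern used above.
out=$(printf 'Agents:\ndiscovery filemgr\n\nFacts:\narchitecture => x86_64\naugeasversion => 1.0.0\n\nServer Statistics:\n' \
      | awk '/Facts:/,/^$/')
printf '%s\n' "$out"
```

The range includes both endpoint lines, so the output starts at "Facts:" and stops at the first empty line after it.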

 #mco facts hostname
Report for fact: hostname

computer001   found 1 times
computer002  found 1 times
computer003  found 1 times

#cat inventory.mc

inventory do
    format "%20s %8s %10s %-20s"
    fields { [ identity, facts["architecture"], facts["operatingsystem"], facts["operatingsystemrelease"] ] }
end

#mco inventory --script inventory.mc
hotpu x86_64 CentOS 6.6
computer001 x86_64 CentOS 6.5
computer002 x86_64 CentOS 6.7
computer003 x86_64 CentOS 6.7
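The format string in inventory.mc is printf-style: %20s right-aligns its field in 20 columns, %-20s left-aligns it. The shell's own printf renders the same layout, which makes it easy to preview column widths before putting them in the script:

```shell
# Preview the inventory.mc column layout with the shell printf builtin:
# %20s right-aligns in 20 columns, %8s in 8, %10s in 10, %-20s left-aligns.
line=$(printf '%20s %8s %10s %-20s' hotpu x86_64 CentOS 6.6)
printf '[%s]\n' "$line"
```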

#mco plugin doc mc
mc
==

MCollective Broadcast based discovery

Author: R.I.Pienaar <rip@devco.net>
Version: 0.1
License: ASL 2.0
Timeout: 2
Home Page: http://marionette-collective.org/

DISCOVERY METHOD CAPABILITIES:
Filter based on configuration management classes
Filter based on system facts
Filter based on mcollective identity
Filter based on mcollective agents
Compound filters combining classes and facts

#echo "computer001" > host && mco rpc rpcutil ping --disc-method flatfile --disc-option ./host
Discovering hosts using the flatfile method .... 1

* [ ============================================================> ] 1 / 1
computer001
Timestamp: 1464848863

Finished processing 1 / 1 hosts in 29.80 ms

#mco find --with-identity /c/

#mco find --with-fact operatingsystem=CentOS
# List the MC server machines whose operating system is CentOS

#mco find --with-agent package
#mco ping --select "operatingsystem=CentOS and !environment=dev"
#mco facts osfamily --limit 5 --with-class base

#mco package status sudo --batch 10 --batch-sleep 20
# Check the status of the sudo package
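The --batch 10 --batch-sleep 20 options run the action on 10 nodes at a time with a 20-second pause between batches, so N nodes take ceil(N/10) batches. The ceiling division is easy to check with shell integer arithmetic:

```shell
# ceil(n/batch) via integer arithmetic: (n + batch - 1) / batch
n=33; batch=10
batches=$(( (n + batch - 1) / batch ))
echo "$batches batches for $n nodes"
```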

puppet MCollective plugin installation

【puppet MCollective plugin installation】

There are two common ways to install MCollective plugins:

  •   Online installation with yum

Add the official Puppet yum repository, then install with yum. Example:

yum install mcollective-filemgr-agent mcollective-filemgr-client mcollective-filemgr-common mcollective-iptables-agent mcollective-iptables-client

  •   Offline installation, method a

This requires a machine with the rpm-build and git tools installed and with the MCollective package plugin already present. Using these tools, the downloaded source files are packaged into rpm format, then distributed and installed. Packaging and installing an MCollective plugin looks like this:

 #git clone https://github.com/cegeka/mcollective-shell-agent # clone the git repository
cd mcollective-shell-agent
[root@hotpu mcollective-shell-agent]# mco plugin package .
[root@hotpu mcollective-shell-agent]# ls -l *.rpm
[root@hotpu mcollective-shell-agent]# rpm -ivh mcollective-shell-command-common-1.0-1.el6.noarch.rpm
Preparing... ########################################### [100%]
1:mcollective-shell-comma########################################### [100%]
[root@hotpu mcollective-shell-agent]# rpm -ivh mcollective-shell-command-client-1.0-1.el6.noarch.rpm
Preparing... ########################################### [100%]
1:mcollective-shell-comma########################################### [100%]
[root@hotpu mcollective-shell-agent]# rpm -ivh mcollective-shell-command-agent-1.0-1.el6.noarch.rpm
Preparing... ########################################### [100%]
1:mcollective-shell-comma########################################### [100%]

  •   Offline installation, method b

Copy the corresponding files from the download directory into the matching subdirectories under /usr/libexec/mcollective/mcollective/.

After installing a plugin, restart the mcollective service.
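Method b amounts to reproducing the libdir layout by hand: agent code and DDL files go under the agent subdirectory, client applications under application. A self-contained sketch using temporary directories (on a real node the target is /usr/libexec/mcollective/mcollective/, and the file names below are placeholders):

```shell
# Sketch of offline method b with placeholder files in temp dirs.
SRC=$(mktemp -d)                  # stand-in for the plugin's source checkout
LIBDIR=$(mktemp -d)/mcollective   # stand-in for /usr/libexec/mcollective/mcollective
mkdir -p "$LIBDIR/agent" "$LIBDIR/application"
touch "$SRC/filemgr.rb" "$SRC/filemgr.ddl"   # placeholder agent code + DDL
cp "$SRC"/filemgr.rb "$SRC"/filemgr.ddl "$LIBDIR/agent/"
ls "$LIBDIR/agent"
# On a real node, restart the daemon afterwards so it loads the new agent:
# /etc/init.d/mcollective restart
```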

puppet ActiveMQ MCollective installation notes for CentOS 6

Notes on installing activemq and mcollective for Puppet on CentOS 6

Install the activemq / mcollective environment

rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
yum -y install java-1.8.0-openjdk activemq

Edit the activemq configuration file /etc/activemq/activemq.xml:

cat /etc/activemq/activemq.xml | grep -v "^$"
<beans
 xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
 http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
 <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
 <property name="locations">
 <value>file:${activemq.conf}/credentials.properties</value>
 </property>
 </bean>
 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="computer001" dataDirectory="${activemq.data}" schedulePeriodForDestinationPurge="60000">
 <destinationPolicy>
 <policyMap>
 <policyEntries>
 <policyEntry topic=">" >
 <pendingMessageLimitStrategy>
 <constantPendingMessageLimitStrategy limit="1000"/>
 </pendingMessageLimitStrategy>
 </policyEntry>
 </policyEntries>
 </policyMap>
 </destinationPolicy>
 <managementContext>
 <managementContext createConnector="false"/>
 </managementContext>
 <persistenceAdapter>
 <kahaDB directory="${activemq.data}/kahadb"/>
 </persistenceAdapter>
 <systemUsage>
 <systemUsage>
 <memoryUsage>
 <memoryUsage percentOfJvmHeap="70" />
 </memoryUsage>
 <storeUsage>
 <storeUsage limit="100 gb"/>
 </storeUsage>
 <tempUsage>
 <tempUsage limit="50 gb"/>
 </tempUsage>
 </systemUsage>
 </systemUsage>
<plugins>
 <simpleAuthenticationPlugin>
 <users>
 <authenticationUser username="client" password="client_password" groups="servers,clients,everyone"/>
 <authenticationUser username="server" password="server_password" groups="servers,everyone"/>
 </users>
 </simpleAuthenticationPlugin>
 
 <authorizationPlugin>
 <map>
 <authorizationMap>
 <authorizationEntries>
 <authorizationEntry queue="mcollective.>" write="clients" read="clients" admin="clients" />
 <authorizationEntry topic="mcollective.>" write="clients" read="clients" admin="clients" />
 <authorizationEntry queue="mcollective.nodes" read="servers" admin="servers" />
 <authorizationEntry queue="mcollective.reply.>" write="servers" admin="servers" />
 <authorizationEntry topic="mcollective.*.agent" read="servers" admin="servers" />
 <authorizationEntry topic="mcollective.registration.agent" write="servers" read="servers" admin="servers" />
 <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
 </authorizationEntries>
 </authorizationMap>
 </map>
 </authorizationPlugin>
</plugins>
<managementContext>
<managementContext createConnector="true" connectorPort="1099"/> 
 </managementContext>
 <transportConnectors>
 <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 </transportConnectors>
 <shutdownHooks>
 <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
 </shutdownHooks>
 </broker>
 <import resource="jetty.xml"/>
</beans>

For a reference configuration file, see: https://github.com/jorhett/learning-mcollective/blob/master/examples/activemq_59.xml

 

Configure the activemq web console

cat /etc/activemq/jetty.xml | grep authenticate
<property name="authenticate" value="true" />

Change the value from false to true to enable console authentication.
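This edit can be scripted with sed. A sketch against a copy of the relevant line (on a real node the file is /etc/activemq/jetty.xml, and activemq must be restarted afterwards):

```shell
# Flip the jetty authenticate flag from false to true in place with sed.
f=$(mktemp)
echo '<property name="authenticate" value="false" />' > "$f"
sed -i 's/name="authenticate" value="false"/name="authenticate" value="true"/' "$f"
cat "$f"
```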

Start the activemq service

/etc/init.d/activemq start 

MC server installation


yum install mcollective -y

MC server configuration file


cat /etc/mcollective/server.cfg

daemonize = 1
direct_addressing = 1
# ActiveMQ connector settings:
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = 192.168.200.51 # IP address or DNS name of the activemq host (a DNS name must resolve)
plugin.activemq.pool.1.port = 61613 # port
plugin.activemq.pool.1.user = server # username
plugin.activemq.pool.1.password = server_password # password
plugin.activemq.heartbeat_interval = 30 # heartbeat interval in seconds
 
# How often to send registration messages
registerinterval = 600
 
# Plugins
securityprovider = psk
plugin.psk = psk_password
 
#
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = info


 

Start the MC server service

/etc/init.d/mcollective  start

Note: mcollective must be installed on every MC server machine, and the clocks on all machines must be kept in sync.

MC client installation

yum -y install mcollective-client

Note: only the client machine needs this package.

MC client configuration

cat /etc/mcollective/client.cfg

daemonize = 1
direct_addressing = 1
# ActiveMQ connector settings:
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = 192.168.200.51
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = client
plugin.activemq.pool.1.password = client_password
plugin.activemq.heartbeat_interval = 30
 
# How often to send registration messages
registerinterval = 600
 
# Plugins
securityprovider = psk
plugin.psk = psk_password
 
#
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log

ttl = 60
color = 1
rpclimitmethod = first

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
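The factsource = yaml setting reads node facts from a flat YAML file of key: value pairs; a common way to keep it current is a cron job that dumps facter output (for example, facter --yaml > /etc/mcollective/facts.yaml). A self-contained sketch of the file format, written to a temp file instead of the real path:

```shell
# Write a minimal facts.yaml to a temp file and pull one fact back out.
f=$(mktemp)
cat > "$f" <<'EOF'
---
architecture: x86_64
operatingsystem: CentOS
operatingsystemrelease: "6.6"
EOF
awk -F': ' '$1 == "operatingsystem" {print $2}' "$f"
```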


MC client test

 # mco ping

computer001 time=70.62 ms
computer002 time=30.12 ms
computer003 time=30.02 ms

These posts are old personal notes and are not guaranteed to be up to date; for more, keep following the puppetfans articles!