puppet vim usage

On Linux we usually write .pp files in the vim editor. With the Puppet vim plugin, Puppet keywords are syntax-highlighted and Puppet code is formatted, which makes manifests cleaner and easier to read.

Link: http://pan.baidu.com/s/1qY4Cpgo  Password: avmv

Download the attachment: vim.tgz

Extract it: tar zxvf vim.tgz

Copy the extracted .vim directory into your home directory and you are done.
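Putting the steps together, a minimal sketch (vim.tgz is the archive from the Baidu link above):

cd ~
tar zxvf vim.tgz      # unpacks a .vim directory containing the plugin files
ls -d ~/.vim          # confirm the plugin directory is in place
vim site.pp           # .pp files should now be syntax-highlighted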

The result looks like this:

[screenshot: Puppet keyword highlighting in vim]

Recommended Puppet e-books (PDF)

These are must-read books for anyone getting started with Puppet. Upstream Puppet moves quickly, and there are differences between the current mainstream 3.x releases, 4.x, and the older 2.x releases. I bought all of these e-books myself and recommend reading them widely; the PDFs are all in English, so be patient.

Puppet Cookbook – Third Edition


Mastering Puppet – Second Edition


Puppet Essentials

Extending Puppet

Puppet 4 Essentials – Second Edition: an introduction to Puppet 4.x

Learning MCollective: covers MCollective

Learning Puppet Security: covers Puppet security

Troubleshooting Puppet: Puppet debugging techniques

If you need them, feel free to contact puppetfans: email puppetfans@163.com, or add us on WeChat via the QR code.

puppet MCollective command-line usage

[Common puppet MCollective commands]

 #mco inventory hotpu

Inventory for hotpu:

Server Statistics:
Version: 2.8.4
Start Time: Mon May 30 18:53:09 +0800 2016
Config File: /etc/mcollective/server.cfg
Collectives: mcollective
Main Collective: mcollective
Process ID: 2260
Total Messages: 1
Messages Passed Filters: 1
Messages Filtered: 0
Expired Messages: 0
Replies Sent: 0
Total Processor Time: 91.79 seconds
System Time: 72.83 seconds

Agents:
discovery filemgr nrpe
package puppet rpcutil
service shell

………………

#mco inventory hotpu | awk '/Facts:/,/^$/'
Facts:
architecture => x86_64
augeasversion => 1.0.0
bios_release_date => 08/10/2007
bios_vendor => Dell Inc.
bios_version => 1.5.1

………………

 #mco facts hostname
Report for fact: hostname

computer001   found 1 times
computer002  found 1 times
computer003  found 1 times

cat inventory.mc

inventory do
  format "%20s %8s %10s %-20s"
  fields { [ identity, facts["architecture"], facts["operatingsystem"], facts["operatingsystemrelease"] ] }
end

#mco inventory --script inventory.mc
hotpu x86_64 CentOS 6.6
computer001 x86_64 CentOS 6.5
computer002 x86_64 CentOS 6.7
computer003 x86_64 CentOS 6.7

#mco plugin doc mc
mc
==

MCollective Broadcast based discovery

Author: R.I.Pienaar <rip@devco.net>
Version: 0.1
License: ASL 2.0
Timeout: 2
Home Page: http://marionette-collective.org/

DISCOVERY METHOD CAPABILITIES:
Filter based on configuration management classes
Filter based on system facts
Filter based on mcollective identity
Filter based on mcollective agents
Compound filters combining classes and facts

#echo "computer001" > host && mco rpc rpcutil ping --disc-method flatfile --disc-option ./host
Discovering hosts using the flatfile method .... 1

* [ ============================================================> ] 1 / 1
computer001
Timestamp: 1464848863

Finished processing 1 / 1 hosts in 29.80 ms

#mco find --with-identity /c/
#mco find --with-fact operatingsystem=CentOS

Lists the MC server machines whose operating system is CentOS.

#mco find --with-agent package
#mco ping --select "operatingsystem=CentOS and !environment=dev"
#mco facts osfamily --limit 5 --with-class base
#mco package status sudo --batch 10 --batch-sleep 20

Checks the status of the sudo package, working in batches of 10 hosts with a 20-second pause between batches.

puppet tagged usage

[puppet tagged usage]

Official Puppet tag reference documentation: https://docs.puppet.com/puppet/latest/reference/lang_tags.html

  • The tag metaparameter tags individual resources;
  • The tag function tags the surrounding container (a class or node);
  • The tagged function tests whether the current container carries a given tag, i.e. it checks tags applied with the tag function. Both functions are shown below, followed by a sketch of the tag metaparameter.

node 'default' {
  tag('puppetclient')
  class { 'role::test': }
}

class role::test {
  if tagged('puppetclient') {
    notify { 'role::test class was tagged.': }
  }
}
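The example above uses the tag and tagged functions; for the tag metaparameter, a minimal sketch (the file resource and tag name are illustrative, not from the original post):

file { '/etc/sudoers.d/ops':    # hypothetical resource, for illustration only
  ensure => file,
  mode   => '0440',
  tag    => 'security',         # tag metaparameter applied to a single resource
}

Resources tagged this way can then be targeted on the command line, e.g. puppet agent -t --tags security.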

puppet MCollective plugin installation

[puppet MCollective plugin installation]

There are two common ways to install MCollective plugins:

  • Online installation with yum

Add the official Puppet yum repository, then install with yum. Example:

yum install mcollective-filemgr-agent mcollective-filemgr-client  mcollective-filemgr-common  mcollective-iptables-agent mcollective-iptables-client
  • Offline installation, method a

This requires a machine with the rpm-build and git tools installed, as well as the MCollective package plugin on that machine. These tools package the downloaded plugin sources into rpm format, which you then distribute and install. Example of packaging and installing an MCollective plugin:

#git clone https://github.com/cegeka/mcollective-shell-agent    # clone the git repository
cd mcollective-shell-agent
[root@hotpu mcollective-shell-agent]# mco plugin package .
[root@hotpu mcollective-shell-agent]# ls -l *.rpm
[root@hotpu mcollective-shell-agent]# rpm -ivh mcollective-shell-command-common-1.0-1.el6.noarch.rpm
Preparing... ########################################### [100%]
1:mcollective-shell-comma########################################### [100%]
[root@hotpu mcollective-shell-agent]# rpm -ivh mcollective-shell-command-client-1.0-1.el6.noarch.rpm
Preparing... ########################################### [100%]
1:mcollective-shell-comma########################################### [100%]
[root@hotpu mcollective-shell-agent]# rpm -ivh mcollective-shell-command-agent-1.0-1.el6.noarch.rpm
Preparing... ########################################### [100%]
1:mcollective-shell-comma########################################### [100%]

  • Offline installation, method b

Copy the corresponding files from the download directory into the matching subdirectories under /usr/libexec/mcollective/mcollective/, as sketched below.
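A minimal sketch of method b, assuming the usual plugin source layout with agent/ and application/ subdirectories (the libdir matches the server.cfg shown later):

cd mcollective-shell-agent
cp agent/*.rb       /usr/libexec/mcollective/mcollective/agent/
cp application/*.rb /usr/libexec/mcollective/mcollective/application/
service mcollective restart    # restart so the new plugin is registered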

After installing a plugin by either method, the mcollective service must be restarted.

puppet ActiveMQ MCollective installation notes for CentOS 6

Notes on installing activemq and mcollective for Puppet on CentOS 6

Install the activemq / mcollective environment

rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
yum -y install java-1.8.0-openjdk activemq

Edit the activemq configuration file /etc/activemq/activemq.xml:

cat /etc/activemq/activemq.xml|grep -v "^$"
<beans
 xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
 http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
 <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
 <property name="locations">
 <value>file:${activemq.conf}/credentials.properties</value>
 </property>
 </bean>
 <broker xmlns="http://activemq.apache.org/schema/core" brokerName="computer001" dataDirectory="${activemq.data}" schedulePeriodForDestinationPurge="60000">
 <destinationPolicy>
 <policyMap>
 <policyEntries>
 <policyEntry topic=">" >
 <pendingMessageLimitStrategy>
 <constantPendingMessageLimitStrategy limit="1000"/>
 </pendingMessageLimitStrategy>
 </policyEntry>
 </policyEntries>
 </policyMap>
 </destinationPolicy>
 <managementContext>
 <managementContext createConnector="false"/>
 </managementContext>
 <persistenceAdapter>
 <kahaDB directory="${activemq.data}/kahadb"/>
 </persistenceAdapter>
 <systemUsage>
 <systemUsage>
 <memoryUsage>
 <memoryUsage percentOfJvmHeap="70" />
 </memoryUsage>
 <storeUsage>
 <storeUsage limit="100 gb"/>
 </storeUsage>
 <tempUsage>
 <tempUsage limit="50 gb"/>
 </tempUsage>
 </systemUsage>
 </systemUsage>
<plugins>
 <simpleAuthenticationPlugin>
 <users>
 <authenticationUser username="client" password="client_password" groups="servers,clients,everyone"/>
 <authenticationUser username="server" password="server_password" groups="servers,everyone"/>
 </users>
 </simpleAuthenticationPlugin>
 
 <authorizationPlugin>
 <map>
 <authorizationMap>
 <authorizationEntries>
 <authorizationEntry queue="mcollective.>" write="clients" read="clients" admin="clients" />
 <authorizationEntry topic="mcollective.>" write="clients" read="clients" admin="clients" />
 <authorizationEntry queue="mcollective.nodes" read="servers" admin="servers" />
 <authorizationEntry queue="mcollective.reply.>" write="servers" admin="servers" />
 <authorizationEntry topic="mcollective.*.agent" read="servers" admin="servers" />
 <authorizationEntry topic="mcollective.registration.agent" write="servers" read="servers" admin="servers" />
 <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
 </authorizationEntries>
 </authorizationMap>
 </map>
 </authorizationPlugin>
</plugins>
<managementContext>
<managementContext createConnector="true" connectorPort="1099"/> 
 </managementContext>
 <transportConnectors>
 <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
 </transportConnectors>
 <shutdownHooks>
 <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
 </shutdownHooks>
 </broker>
 <import resource="jetty.xml"/>
</beans>

For a reference configuration file, see: https://github.com/jorhett/learning-mcollective/blob/master/examples/activemq_59.xml

Configure the activemq web console


In /etc/activemq/jetty.xml, change the authenticate property value from false to true:

cat /etc/activemq/jetty.xml | grep authenticate
<property name="authenticate" value="true" />

Start the activemq service

/etc/init.d/activemq start 
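A quick sanity check, sketched under the assumption of default ports (61613 is the stomp+nio listener configured above; 8161 is ActiveMQ's default web-console port):

netstat -lntp | grep -E '61613|8161'    # both listeners should be up
curl -I http://localhost:8161/admin     # expect HTTP 401 now that authenticate=true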

MC server installation


yum install mcollective -y

MC server configuration file


cat /etc/mcollective/server.cfg

daemonize = 1
direct_addressing = 1
# ActiveMQ connector settings:
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = 192.168.200.51   # IP address or DNS name of the activemq host (a DNS name must resolve)
plugin.activemq.pool.1.port = 61613            # port
plugin.activemq.pool.1.user = server           # username
plugin.activemq.pool.1.password = server_password    # password
plugin.activemq.heartbeat_interval = 30        # heartbeat interval, in seconds
 
# How often to send registration messages
registerinterval = 600
 
# Plugins
securityprovider = psk
plugin.psk = psk_password
 
#
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = info



Start the MC server service

/etc/init.d/mcollective  start

Note: every MC server machine needs this installed, and every machine's clock must be kept in sync (one option is sketched below).
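One way to keep clocks in sync, a sketch assuming the ntpdate utility is installed (any NTP arrangement works):

ntpdate -u pool.ntp.org    # one-shot sync; run it from cron, or run ntpd for continuous sync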

MC client installation

yum -y install mcollective-client

Note: only the client (admin) machine needs this package.

MC client configuration

cat /etc/mcollective/client.cfg

daemonize = 1
direct_addressing = 1
# ActiveMQ connector settings:
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = 192.168.200.51
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = client
plugin.activemq.pool.1.password = client_password
plugin.activemq.heartbeat_interval = 30
 
# How often to send registration messages
registerinterval = 600
 
# Plugins
securityprovider = psk
plugin.psk = psk_password
 
#
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log

ttl = 60
color = 1
rpclimitmethod = first

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
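factsource = yaml expects the file named by plugin.yaml to exist and hold current facts. A common approach (a sketch, not from the original notes) is to regenerate it periodically from Facter, e.g. from cron:

facter --yaml > /etc/mcollective/facts.yaml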


MC client test

 # mco ping

computer001 time=70.62 ms

computer002 time=30.12 ms

computer003 time=30.02 ms

These posts come from my old personal notes and may be out of date; stay tuned to puppetfans for more!

puppet variable scope

The previous post covered Puppet variable classification; this one looks at Puppet variable scope.

By scope, Puppet variables fall into global variables, node variables, class variables, and subclass variables; a combined sketch follows the list below.

  • Global variables: defined in site.pp.
      cat /etc/puppet/manifests/site.pp
      $role = 'default'
  • Node variables: defined inside a node block.
      node 'www.puppetfans.com' { $dbname = 'puppetfans' }
  • Class variables: defined inside a class.
      class base { $username = 'puppetfans' }
  • Subclass variables: read another class's variable through its fully qualified name.
      class base01 { $name = $::base::username }
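Putting the four scopes together, a minimal sketch (names follow the snippets above; the notify is only illustrative):

$role = 'default'                  # top scope, e.g. in site.pp

node 'www.puppetfans.com' {
  $dbname = 'puppetfans'           # node scope
  include base                     # declare base first so base01 can read its variable
  include base01
}

class base {
  $username = 'puppetfans'         # class scope
}

class base01 {
  $name = $::base::username        # fully qualified access to another class's variable
  notify { "role=${::role} db=${dbname} user=${name}": }
}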


Installing puppet on CentOS

There are two main ways to install Puppet: through the package manager, or from source. Installing from source is not recommended; on CentOS, install with yum.

Before installing, make sure the server has network access, then add the official Puppet yum repository from https://yum.puppetlabs.com/.

Example: installing puppet 3.x on CentOS 6

sudo rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
sudo yum -y install puppet-server 
sudo yum -y install puppet

The default configuration files all live under /etc/puppet.
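A quick look at a typical puppet 3.x layout (exact contents vary by release):

ls /etc/puppet
# auth.conf  fileserver.conf  puppet.conf  manifests  modules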

Example: installing puppet 4.x on CentOS 6

sudo rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-6.noarch.rpm
sudo yum -y install puppetserver 
sudo yum -y install puppet 

With puppet 4.x, all the binaries live under /opt/puppetlabs/bin. Be sure to add this directory to your $PATH, otherwise the shell will report that the puppet command cannot be found (a sketch follows). The configuration file is /etc/puppetlabs/puppet/puppet.conf.
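A minimal sketch of the $PATH fix (the profile.d file name is illustrative):

echo 'export PATH=/opt/puppetlabs/bin:$PATH' > /etc/profile.d/puppet.sh
source /etc/profile.d/puppet.sh
puppet --version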


Example puppet server configuration file:

vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code

By default, Puppet Server uses a 2 GB JVM heap. To adjust it, proceed as follows:

Edit the /etc/sysconfig/puppetserver file and modify the following parameter value:

JAVA_ARGS="-Xms2g -Xmx2g"
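Then restart the service so the new heap settings take effect (using the init script that ships with the el6 package):

service puppetserver restart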

For more JVM tuning parameters, see the official Oracle documentation.


Log path: /var/log/puppet

To view all configuration parameters:

puppet config print all
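You can also print a single setting, for example:

puppet config print modulepath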