Welcome to my blog.

2019-03-24
Operating Docker as a Non-root User

1. Permission error when logged in as a regular user

[dev@kube-node1 ~]$ docker ps -a
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.37/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied

2. The official docs (https://docs.docker.com/install/linux/linux-postinstall/) explain:

The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.

If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
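As a quick sanity check before and after the group change, the socket's ownership and your access to it can be inspected from Python. This is a small sketch of my own (the helper names are invented; the path is the default Docker daemon socket):

```python
import os
import stat


def can_access(path):
    """Return True if the current user has read/write access to `path`."""
    return os.access(path, os.R_OK | os.W_OK)


def describe_owner(path):
    """Return (mode string, uid, gid) for `path`, e.g. to confirm the docker group owns it."""
    st = os.stat(path)
    return stat.filemode(st.st_mode), st.st_uid, st.st_gid


if __name__ == "__main__":
    sock = "/var/run/docker.sock"  # default Docker daemon socket
    if os.path.exists(sock):
        print(sock, describe_owner(sock), "accessible:", can_access(sock))
```

After adding yourself to the docker group (and re-logging-in), `can_access` should flip from False to True.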

3. Solution:

3.1 Create the docker group
# groupadd docker
3.2 Add the target user to the docker group
# gpasswd -a ${USER} docker
3.3 Restart Docker
# systemctl restart docker
3.4 Verify
[root@kube-node1 /home/yunwei]# su - dev
Last login: Wed Feb 20 14:08:20 CST 2019 on pts/0
[dev@kube-node1 ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e8404a144626 f57c75cd7b0a "/heapster --source=…" 11 days ago Exited (137) 11 days ago k8s_heapster_heapster-9cc69ddcf-qlww2_kube-system_f8345be8-1d5e-11e9-acd8-005056b22233_11


2019-03-24
How to Enter a Docker Container or a Pod?

Two approaches:
Enter a Docker container:
docker exec -ti  <your-container-name>   /bin/sh
Enter a Pod:
kubectl exec -ti <your-pod-name>  -n <your-namespace>  -- /bin/sh

Appendix:

# View logs
kubectl logs -f deploy/test-6549dd8c94-w8kb9 -n huidu

# Enter the container
kubectl exec -it -n huidu test-6549dd8c94-w8kb9 -- /bin/bash

# Restart a pod (delete it; its controller recreates it)
kubectl delete po -n huidu test-6549dd8c94-w8kb9


2019-03-24
How to Fix "The connection to the server localhost:8080 was refused"?

The following error appears when viewing cluster info:

[root@gzzsg-test-k8smaster03 ~]# kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Cause analysis:

  • 1. Port 8080 was not enabled when the apiserver started
  • 2. The config file sets --insecure-port=0

Solution:

  • 1. Enable port 8080: --insecure-port=8080
  • 2. Point kubectl at ~/.kube/config:
    Copy the token-based KubeConfig content (created for the kubernetes dashboard plugin deployment) into ~/.kube/config.
    [root@gzzsg-test-k8smaster01 cfg]# kubectl  cluster-info
    Kubernetes master is running at https://k8s-api-test.xxx.com:8443
    kubernetes-dashboard is running at https://k8s-api-test.xxx.com:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

2019-03-03
The Python cStringIO Module (with an Example: Generating an Excel File and Emailing It)

In computing, IO is short for Input/Output.
In IO programming, the Stream is a central concept. Picture a stream as a water pipe: the data is the water, and it only flows one way. An input stream carries data from the outside world (disk, network) into memory; an output stream carries data from memory back out. For web browsing, the browser and the server need at least two such pipes between them so that they can both send and receive data.
cStringIO is similar to StringIO (see the Python StringIO module docs): much of the time we can read and write str in memory rather than writing a file to disk first. cStringIO's advantage is that it is implemented in C, so it runs faster than StringIO; if you use StringIO heavily, consider switching to cStringIO. Official docs: https://docs.python.org/2/library/stringio.html

A few things to watch out for with cStringIO:
  • cStringIO.StringIO([s]) is a factory function; you cannot subclass or extend it.
  • Unicode strings that cannot be encoded as plain ASCII are not supported.
  • A memory file created with a string argument, e.g.:
    import cStringIO 
    s='abcd'
    a=cStringIO.StringIO(s)

is read-only and has no write() method.
Created without an argument, it supports both read() and write().
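For reference, the Python 3 replacement is io.StringIO (cStringIO was removed in Python 3). A minimal sketch of the same distinction; note one behavioral difference: an io.StringIO seeded with an initial value is still writable, and writes simply overwrite from the current position:

```python
import io

# A buffer seeded with an initial value; the position starts at 0
a = io.StringIO('abcd')
print(a.read())        # 'abcd'

# Unlike cStringIO, this buffer is still writable:
a.seek(0)
a.write('XY')          # overwrites in place starting at position 0
a.seek(0)
print(a.read())        # 'XYcd'
a.close()

# An empty buffer supports both read() and write(), as before
b = io.StringIO()
b.write('hello')
print(b.getvalue())    # 'hello'
b.close()
```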

Example usage:
import cStringIO

output = cStringIO.StringIO()
output.write('First line.\n')
print >>output, 'Second line.'

# Retrieve file contents -- this will be
# 'First line.\nSecond line.\n'
contents = output.getvalue()

# Close object and discard memory buffer --
# .getvalue() will now raise an exception.
output.close()
A practical example:
import time
from email.mime.text import MIMEText

import xlsxwriter
try:
    import cStringIO as StringIO
except ImportError:
    import StringIO

# `res` is assumed to hold the slow-query rows fetched earlier (not shown)

# Create an in-memory file
xls = StringIO.StringIO()
try:
    # Create the workbook on top of the memory file
    workbook = xlsxwriter.Workbook(xls)
    # Add a worksheet
    worksheet = workbook.add_worksheet()
    # Write the header (first row)
    title = ["start_timestamp", "db_instance_id", "lock_time", "query_time", "return_row_count", "parse_row_count", "db_name", "sql", "user", "host"]
    worksheet.write_row(0, 0, title)
    j = 1
    for i in res:
        worksheet.write(j, 0, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(i[0])))
        worksheet.write(j, 1, i[1])
        worksheet.write(j, 2, i[2])
        worksheet.write(j, 3, i[3])
        worksheet.write(j, 4, i[4])
        worksheet.write(j, 5, i[5])
        worksheet.write(j, 6, i[6])
        worksheet.write(j, 7, i[7])
        worksheet.write(j, 8, i[8])
        worksheet.write(j, 9, i[9])
        j += 1
    workbook.close()
except Exception as e:
    print(str(e))

# Rewind the memory file to the beginning so the mail step below can read all of it
xls.seek(0)

## When sending the mail, read the attachment straight from memory
attach = MIMEText(xls.read(), "base64", "gb2312")

# Release the memory buffer
xls.close()
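The attach-from-memory step generalizes beyond Python 2's MIMEText trick. Here is a Python 3 sketch of the same idea; the workbook bytes below are a stand-in for real xlsxwriter output, and the filename is made up:

```python
import io
from email.mime.application import MIMEApplication

# Stand-in for the bytes xlsxwriter would have written to the memory file
buf = io.BytesIO()
buf.write(b"fake xlsx bytes")

# Rewind before reading, exactly as in the example above
buf.seek(0)

# Build the attachment from memory; base64 transfer encoding is applied automatically
attach = MIMEApplication(buf.read(), _subtype="vnd.openxmlformats-officedocument.spreadsheetml.sheet")
attach.add_header("Content-Disposition", "attachment", filename="slow_queries.xlsx")

buf.close()
print(attach["Content-Disposition"])
```

The resulting `attach` object can be added to a MIMEMultipart message and sent with smtplib, with no temporary file ever touching disk.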

2019-03-03
pyenv + pipenv: Initialize the Python Environment You Need

1. Introduction

Many ops engineers now write small tools or platforms in Go or Python, and many of them juggle multiple Python versions — typically legacy Python 2.7 code alongside new Python 3 development. Here are two tools for bootstrapping exactly the Python environment you need.

2. pyenv

What can pyenv do?

  • Change the global Python version on a per-user basis.
  • Provide per-project Python versions.
  • Override the Python version with an environment variable.
  • Search commands across multiple installed Python versions at once.

Installing pyenv

1. Install dependencies
yum install gcc make patch gdbm-devel openssl-devel sqlite-devel zlib-devel bzip2-devel readline-devel
2. Install
curl -L https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash

export PATH="/root/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

For pyenv usage, see: https://github.com/pyenv/pyenv

3. What is pipenv?

pipenv is a management tool by Kenneth Reitz (author of requests). It mainly solves the problem of needing different versions of the same module across projects.

Installing pipenv
pip install pipenv
For detailed pipenv usage, see: https://github.com/pypa/pipenv

Using pipenv with PyCharm


Appendix:
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
Solution:
sudo yum install mysql-devel
sudo yum install python-devel
sudo pipenv install mysql-python


2019-02-18
Core Classes in the Ansible API (with Playbook Execution Example Code)

1. Introduction

Many teams build their own CMDB platforms these days, and a common question is how to periodically collect hardware data and push it into the database. For this I recommend an automation tool such as SaltStack or Ansible rather than hand-rolled scripts — not that scripts can't work, but for standardization and at any real machine scale, an automation tool is the better choice.

2. A quick look: running ls via the shell module

#!/usr/bin/env python
'''
Imports
'''
import json
import shutil
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
import ansible.constants as C

class ResultCallback(CallbackBase):
    """A sample callback plugin used for performing an action as results come in

    If you want to collect all results into a single object for processing at
    the end of the execution, look into utilizing the ``json`` callback plugin
    or writing your own custom callback plugin
    """
    def v2_runner_on_ok(self, result, **kwargs):
        """Print a json representation of the result

        This method could store the result in an instance attribute for retrieval later
        """
        host = result._host
        print(json.dumps({host.name: result._result}, indent=4))

# since the API is constructed for the CLI it expects certain options to always be set; the named tuple 'fakes' the args parsing options object
Options = namedtuple('Options', ['connection', 'module_path', 'forks', 'become', 'become_method', 'become_user', 'check', 'diff'])
options = Options(connection='local', module_path=['/to/mymodules'], forks=10, become=None, become_method=None, become_user=None, check=False, diff=False)

# initialize needed objects
loader = DataLoader()  # Takes care of finding and reading yaml, json and ini files
passwords = dict(vault_pass='secret')

# Instantiate our ResultCallback for handling results as they come in. Ansible expects this to be one of its main display outlets
results_callback = ResultCallback()

# create inventory, use path to host config file as source or hosts in a comma separated string
inventory = InventoryManager(loader=loader, sources='localhost,')

# variable manager takes care of merging all the different sources to give you a unified view of variables available in each context
variable_manager = VariableManager(loader=loader, inventory=inventory)

# create a data structure that represents our play, including tasks; this is basically what our YAML loader does internally
play_source = dict(
    name="Ansible Play",
    hosts='localhost',
    gather_facts='no',
    tasks=[
        dict(action=dict(module='shell', args='ls'), register='shell_out'),
        dict(action=dict(module='debug', args=dict(msg='{{shell_out.stdout}}')))
    ]
)

# Create play object; playbook objects use .load instead of init or new methods,
# this will also automatically create the task objects from the info provided in play_source
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)

# Run it - instantiate the task queue manager, which takes care of forking and setting up all objects to iterate over the host list and tasks
tqm = None
try:
    tqm = TaskQueueManager(
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        options=options,
        passwords=passwords,
        stdout_callback=results_callback,  # Use our custom callback instead of the ``default`` callback plugin, which prints to stdout
    )
    result = tqm.run(play)  # most interesting data for a play is actually sent to the callback's methods
finally:
    # we always need to clean up child procs and the structures we use to communicate with them
    if tqm is not None:
        tqm.cleanup()

    # Remove ansible tmpdir
    shutil.rmtree(C.DEFAULT_LOCAL_TMP, True)

The output:

[root@localhost python]# python ansible_api.py 
{
    "localhost": {
        "_ansible_parsed": true,
        "stderr_lines": [],
        "changed": true,
        "end": "2018-05-29 03:44:27.292832",
        "_ansible_no_log": false,
        "stdout": "ansible_api.py\nsaomiao.py",
        "cmd": "ls",
        "start": "2018-05-29 03:44:27.262984",
        "delta": "0:00:00.029848",
        "stderr": "",
        "rc": 0,
        "invocation": {
            "module_args": {
                "creates": null,
                "executable": null,
                "_uses_shell": true,
                "_raw_params": "ls",
                "removes": null,
                "warn": true,
                "chdir": null,
                "stdin": null
            }
        },
        "stdout_lines": [
            "ansible_api.py",
            "saomiao.py"
        ]
    }
}
{
    "localhost": {
        "msg": "ansible_api.py\nsaomiao.py",
        "changed": false,
        "_ansible_verbose_always": true,
        "_ansible_no_log": false
    }
}

As the example above suggests, to collect hardware information such as CPU, disks, and memory, run the setup module, filter the facts you need, and store them in the database.
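The filter-then-store step can be sketched like this; the facts dict below is a trimmed, made-up stand-in for what the setup module actually returns:

```python
# A trimmed stand-in for the facts returned by the setup module
facts = {
    "ansible_hostname": "kube-node1",
    "ansible_processor_vcpus": 4,
    "ansible_memtotal_mb": 7821,
    "ansible_default_ipv4": {"address": "192.168.72.129"},
    "ansible_all_ipv6_addresses": [],  # a field the CMDB does not care about
}

# Keep only the fields the CMDB needs
WANTED = ("ansible_hostname", "ansible_processor_vcpus",
          "ansible_memtotal_mb", "ansible_default_ipv4")
record = {k: facts[k] for k in WANTED if k in facts}

print(record)  # ready to INSERT into the CMDB table
```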

3. Core classes in the Ansible API

from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
Core class       | Purpose                             | Path
DataLoader       | Reads YAML/JSON/INI files           | ansible.parsing.dataloader
VariableManager  | Stores variable information         | ansible.vars.manager
InventoryManager | Loads inventory files               | ansible.inventory.manager
Play             | Stores play/host role information   | ansible.playbook.play
TaskQueueManager | Task queue                          | ansible.executor.task_queue_manager
CallbackBase     | Status callbacks                    | ansible.plugins.callback
In [2]: from ansible.vars.manager import VariableManager

In [3]: from ansible.inventory.manager import InventoryManager

In [4]: from ansible.playbook.play import Play

In [5]: from ansible.executor.playbook_executor import PlaybookExecutor

In [6]: from ansible.executor.task_queue_manager import TaskQueueManager

In [7]: from ansible.plugins.callback import CallbackBase

In [8]: loader = DataLoader()  ## instantiate the loader

In [9]: inventory = InventoryManager(loader=loader, sources=['/etc/ansible/hosts'])  ## InventoryManager returns an inventory instance
Run dir() to inspect the available methods:
In [10]: dir(inventory)
Out[10]:
['__class__',
'__delattr__',
'__dict__',
'__doc__',
'__format__',
'__getattribute__',
'__hash__',
'__init__',
'__module__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'_apply_subscript',
'_enumerate_matches',
'_evaluate_patterns',
'_hosts_patterns_cache',
'_inventory',
'_inventory_plugins',
'_loader',
'_match_list',
'_match_one_pattern',
'_pattern_cache',
'_restriction',
'_setup_inventory_plugins',
'_sources',
'_split_subscript',
'_subset',
'add_group',
'add_host',
'clear_caches',
'clear_pattern_cache',
'get_groups_dict',
'get_host',
'get_hosts',
'get_vars',
'groups',
'hosts',
'list_groups',
'list_hosts',
'localhost',
'parse_source',
'parse_sources',
'reconcile_inventory',
'refresh_inventory',
'remove_restriction',
'restrict_to_hosts',
'subset']
/etc/ansible/hosts
[root@localhost ~]# tail /etc/ansible/hosts 
# leading 0s:

## db-[99:101]-node.example.com

[VM_129]
192.168.72.129
192.168.72.130
[VM_129:vars]
ansible_ssh_port=22
nginx_version=1.13.1
Get the host groups:
In [30]: print inventory.get_groups_dict()
{'ungrouped': [], 'all': [u'192.168.72.129', u'192.168.72.130'], u'VM_129': [u'192.168.72.129', u'192.168.72.130']}
Get the hosts:
In [31]: inventory.hosts
Out[31]: {u'192.168.72.129': 192.168.72.129, u'192.168.72.130': 192.168.72.130}
variable_manager
In [32]: variable_manager = VariableManager(loader=loader, inventory=inventory)


### inspect the available methods
In [33]: variable_manager.
variable_manager.clear_facts variable_manager.set_host_facts
variable_manager.extra_vars variable_manager.set_host_variable
variable_manager.get_vars variable_manager.set_inventory
variable_manager.options_vars variable_manager.set_nonpersistent_facts

Three variable_manager members worth highlighting:

  • 1. View variables: variable_manager.get_vars
  • 2. Extra variables: variable_manager.extra_vars
  • 3. Set a host variable: variable_manager.set_host_variable
In [37]: host=inventory.get_host(hostname='192.168.72.129')

In [38]: variable_manager.get_vars(host=host)
Out[38]:
{'ansible_playbook_python': '/usr/bin/python2',
 u'ansible_ssh_port': 22,
 'group_names': [u'VM_129'],
 'groups': {u'VM_129': [u'192.168.72.129', u'192.168.72.130'],
            'all': [u'192.168.72.129', u'192.168.72.130'],
            'ungrouped': []},
 'inventory_dir': u'/etc/ansible',
 'inventory_file': u'/etc/ansible/hosts',
 'inventory_hostname': u'192.168.72.129',
 'inventory_hostname_short': u'192',
 u'nginx_version': u'1.13.1',
 'omit': '__omit_place_holder__91ac78f3a75e01eaa7d20ff0db9463023de382b8',
 'playbook_dir': '/root'}

4. How to run a playbook?

The sections above covered running an ad-hoc command; here, for completeness, is the code to run a playbook.

# coding:utf-8
# python 2.7.5
# ansible 2.4.2.0

import os, sys
import json
import shutil
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.playbook.play import Play
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
from ansible.errors import AnsibleParserError
import ansible.constants as C


class Ad_hocResultsCollector(CallbackBase):

    def __init__(self, *args, **kwargs):
        super(Ad_hocResultsCollector, self).__init__(*args, **kwargs)
        self.host_ok = {}
        self.host_unreachable = {}
        self.host_failed = {}

    def v2_runner_on_unreachable(self, result):
        self.host_unreachable[result._host.get_name()] = result

    def v2_runner_on_ok(self, result, *args, **kwargs):
        self.host_ok[result._host.get_name()] = result

    def v2_runner_on_failed(self, result, *args, **kwargs):
        self.host_failed[result._host.get_name()] = result

    def getAd_hocResult(self):
        # Callback helper that collects the results
        results_info = {'success': {}, 'failed': {}, 'unreachable': {}}
        for host, result in self.host_ok.items():
            results_info['success'][host] = result._result
        for host, result in self.host_failed.items():
            results_info['failed'][host] = result._result
        for host, result in self.host_unreachable.items():
            results_info['unreachable'][host] = result._result

        return results_info


class PlayBookResultsCollector(CallbackBase):
    CALLBACK_VERSION = 2.0

    def __init__(self, *args, **kwargs):
        super(PlayBookResultsCollector, self).__init__(*args, **kwargs)
        self.task_ok = {}
        self.task_skipped = {}
        self.task_failed = {}
        self.task_status = {}
        self.task_unreachable = {}
        self.status_no_hosts = False

    def v2_runner_on_ok(self, result, *args, **kwargs):
        self.task_ok[result._host.get_name()] = result

    def v2_runner_on_failed(self, result, *args, **kwargs):
        self.task_failed[result._host.get_name()] = result

    def v2_runner_on_unreachable(self, result):
        self.task_unreachable[result._host.get_name()] = result

    def v2_runner_on_skipped(self, result):
        self.task_skipped[result._host.get_name()] = result

    def v2_playbook_on_no_hosts_matched(self):
        self.status_no_hosts = True

    def v2_playbook_on_stats(self, stats):
        hosts = sorted(stats.processed.keys())
        for h in hosts:
            t = stats.summarize(h)
            self.task_status[h] = {
                "success": t['ok'],
                "changed": t['changed'],
                "unreachable": t['unreachable'],
                "skipped": t['skipped'],
                "failed": t['failures']
            }

    def getPlaybookResult(self):
        if self.status_no_hosts:
            results = {'msg': "Could not match supplied host pattern", 'flag': False, 'executed': False}
            return results
        results_info = {'skipped': {}, 'failed': {}, 'success': {}, "status": {}, 'unreachable': {}, "changed": {}}
        for host, result in self.task_ok.items():
            results_info['success'][host] = result._result
        for host, result in self.task_failed.items():
            results_info['failed'][host] = result
        for host, result in self.task_status.items():
            results_info['status'][host] = result
        for host, result in self.task_skipped.items():
            results_info['skipped'][host] = result
        for host, result in self.task_unreachable.items():
            results_info['unreachable'][host] = result
        return results_info


class AnsRunner(object):
    def __init__(self):
        self.resource = None
        self.inventory = None
        self.variable_manager = None
        self.loader = None
        self.options = None
        self.passwords = None
        self.__initializeData()

    def __initializeData(self):
        # Initialize the objects we need
        Options = namedtuple('Options', ['connection', 'module_path', 'forks', 'timeout', 'remote_user',
                                         'ask_pass', 'private_key_file', 'ssh_common_args', 'ssh_extra_args',
                                         'sftp_extra_args',
                                         'scp_extra_args', 'become', 'become_method', 'become_user', 'ask_value_pass',
                                         'verbosity',
                                         'check', 'listhosts', 'listtasks', 'listtags', 'syntax', 'diff'])

        self.options = Options(connection='ssh', module_path=None, forks=100, timeout=10,
                               remote_user='root', ask_pass=False, private_key_file=None, ssh_common_args=None,
                               ssh_extra_args=None,
                               sftp_extra_args=None, scp_extra_args=None, become=None, become_method=None,
                               become_user='root', ask_value_pass=False, verbosity=None, check=False, listhosts=False,
                               listtasks=False, listtags=False, syntax=False, diff=False)

        self.loader = DataLoader()  # Takes care of finding and reading yaml, json and ini files
        self.passwords = dict(vault_pass='secret')
        self.results_callback = Ad_hocResultsCollector()

        # create inventory, use path to host config file as source or hosts in a comma separated string e.g. sources=['10.1.30.193',]
        self.inventory = InventoryManager(loader=self.loader, sources=['/etc/ansible/hosts'])
        # variable manager takes care of merging all the different sources to give you a unified view of variables available in each context
        self.variable_manager = VariableManager(loader=self.loader, inventory=self.inventory)

    def run_ad_hoc(self, host_list=None, module_name=None, module_args=None):
        self.results_callback = Ad_hocResultsCollector()
        # create a data structure that represents our play, including tasks; this is basically what our YAML loader does internally
        play_source = dict(
            name="Ansible Play",
            hosts=host_list,
            gather_facts='no',
            tasks=[
                dict(action=dict(module=module_name, args=module_args), register='shell_out'),
                dict(action=dict(module='debug', args=dict(msg='{{shell_out.stdout}}')))
            ]
        )

        play = Play().load(play_source, variable_manager=self.variable_manager, loader=self.loader)
        tqm = None
        try:
            tqm = TaskQueueManager(
                inventory=self.inventory,
                variable_manager=self.variable_manager,
                loader=self.loader,
                options=self.options,
                passwords=self.passwords,
                stdout_callback=self.results_callback,
                # Use our custom callback instead of the ``default`` callback plugin, which prints to stdout
            )
            result = tqm.run(play)  # most interesting data for a play is actually sent to the callback's methods
            return self.results_callback.getAd_hocResult()
        finally:
            # we always need to clean up child procs and the structures we use to communicate with them
            if tqm is not None:
                tqm.cleanup()

            # Remove ansible tmpdir
            shutil.rmtree(C.DEFAULT_LOCAL_TMP, True)

    def get_cmdb_info(self, host_list):
        self.results_callback = Ad_hocResultsCollector()
        play_source = dict(
            name="Ansible setup Play",
            hosts=host_list,
            gather_facts='no',
            tasks=[
                dict(action=dict(module='setup'), register='shell_out'),
            ]
        )

        play = Play().load(play_source, variable_manager=self.variable_manager, loader=self.loader)
        tqm = None
        try:
            tqm = TaskQueueManager(
                inventory=self.inventory,
                variable_manager=self.variable_manager,
                loader=self.loader,
                options=self.options,
                passwords=self.passwords,
                stdout_callback=self.results_callback,
            )
            result = tqm.run(play)
            return self.results_callback.getAd_hocResult()
        finally:
            if tqm is not None:
                tqm.cleanup()
            shutil.rmtree(C.DEFAULT_LOCAL_TMP, True)

    def run_playbook(self, playbook_path, host_list=None, extra_vars=None):
        for i in playbook_path:
            if not os.path.exists(i):
                print('[INFO] The [%s] playbook does not exist' % i)
                sys.exit()

        self.variable_manager.extra_vars = extra_vars
        passwords = None
        self.results_callback = PlayBookResultsCollector()
        try:
            playbook = PlaybookExecutor(playbooks=playbook_path, inventory=self.inventory,
                                        variable_manager=self.variable_manager,
                                        loader=self.loader, options=self.options, passwords=passwords)
            playbook._tqm._stdout_callback = self.results_callback
            # Run the playbook
            result = playbook.run()
            return self.results_callback.getPlaybookResult()
        except AnsibleParserError:
            code = 1001
            results = {'playbook': playbook_path, 'msg': playbook_path + ' playbook have syntax error', 'flag': False}
            return code, results
        # except Exception as e:
        #     return False


if __name__ == '__main__':
    ANS = AnsRunner()
    ret = ANS.run_ad_hoc(host_list='*', module_name='shell', module_args='ls')
    # ret = ANS.get_cmdb_info(host_list='*')
    # print(ret)
    print(ANS.run_playbook(playbook_path=['/data/Playbook/install_jdk18.yaml'],
                           extra_vars={"remote_server": "*"}))


2019-02-18
How to Push Images from Docker to Harbor?

Log in

[root@node1 harbor]# docker login -u admin -p Harbor12345  http://10.80.80.251
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://10.80.80.251/v2/: read tcp 10.80.80.251:50374->10.80.80.251:443: read: connection reset by peer
The cause:

Docker uses HTTPS by default (in production, use a proper domain with a certificate). If the registry is HTTP-only, edit the Docker config /etc/docker/daemon.json and add the insecure-registries parameter:

[root@node1 harbor]#  cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://v5d7kh0f.mirror.aliyuncs.com"],
"insecure-registries": ["10.80.80.251"]
}

# Restart docker
[root@node1 harbor]# systemctl restart docker



# Log in again after the restart
[root@node1 harbor]# docker login -u admin -p Harbor12345 http://10.80.80.251
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
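Scripting the daemon.json change is straightforward. A sketch of my own (the registry address matches the example above; adapt the path and address to your host), which adds the registry to insecure-registries idempotently:

```python
import json


def add_insecure_registry(config_text, registry):
    """Add `registry` to the insecure-registries list of a daemon.json document."""
    config = json.loads(config_text) if config_text.strip() else {}
    registries = config.setdefault("insecure-registries", [])
    if registry not in registries:
        registries.append(registry)
    return json.dumps(config, indent=4)


if __name__ == "__main__":
    original = '{"registry-mirrors": ["https://v5d7kh0f.mirror.aliyuncs.com"]}'
    print(add_insecure_registry(original, "10.80.80.251"))
    # then write the result back to /etc/docker/daemon.json and `systemctl restart docker`
```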

List the local images

[root@node1 harbor]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

gcr.io/google_containers/cluster-proportional-autoscaler-amd64 1.3.0 33813c948942 3 months ago 45.8MB
registry.cn-hangzhou.aliyuncs.com/ringtail/cluster-proportional-autoscaler-amd64 v1.3.0 33813c948942 3 months ago 45.8MB
cr.io/google_containers/cluster-proportional-autoscaler-amd64 1.3.0 33813c948942 3 months ago 45.8MB
google_containers/cluster-proportional-autoscaler-amd64 1.3.0 33813c948942 3 months ago 45.8MB

For example, to upload registry.cn-hangzhou.aliyuncs.com/google_containers/cluster-proportional-autoscaler-amd64 to the Harbor registry, the steps are:

1. Tag the image

docker tag registry.cn-hangzhou.aliyuncs.com/ringtail/cluster-proportional-autoscaler-amd64:v1.3.0 10.80.80.251/google_containers/cluster-proportional-autoscaler-amd64:1.3.0

2. Log in

See the beginning of this post.

3. Push (create the google_containers project in Harbor first)

[root@node1 harbor]# docker push   10.80.80.251/google_containers/cluster-proportional-autoscaler-amd64:1.3.0
The push refers to repository [10.80.80.251/google_containers/cluster-proportional-autoscaler-amd64]
a636ea940e54: Pushed
1.3.0: digest: sha256:4fd37c5b29a38b02c408c56254bd1a3a76f3e236610bc7a8382500bbf9ecfc76 size: 528

2019-02-18
How to Build a Highly Available Docker Harbor Registry

1. Types of Docker registries

  • 1. Public registries (Docker Hub, Alibaba Cloud, NetEase Cloud, etc.)
  • 2. Private registries (Harbor, docker-registry)

Docker-registry is the official tool for building a private image registry; Harbor is an open-source, enterprise-grade Docker registry developed by a team at VMware.

2. What is Harbor?

Harbor is an open source cloud native registry that stores, signs, and scans container images for vulnerabilities.

Harbor solves common challenges by delivering trust, compliance, performance, and interoperability. It fills a gap for organizations and applications that cannot use a public or cloud-based registry, or want a consistent experience across clouds.

3. Harbor components

As the architecture diagram (not reproduced here) shows, Harbor consists of six components:

  • Proxy: Harbor's components, such as the registry, the UI, and the token service, all sit behind a reverse proxy. The proxy forwards requests from browsers and Docker clients to the various backend services.

  • Registry: Stores Docker images and handles docker pull/push commands. Because Harbor enforces access control on images, the Registry directs clients to the token service to obtain a valid token for each pull/push.

  • Core services: Harbor's core functionality, mainly providing:

    • UI: a graphical user interface that helps users manage the Registry;
    • Webhook: a mechanism configured in the Registry so that image status changes in the Registry are propagated to Harbor's webhook endpoint. Harbor uses webhooks to update logs, trigger replication, and more;
    • Token: issues a token for each docker push/pull command based on the user's project role. If a request from the Docker client carries no token, the Registry redirects it to the token service;
  • Database: Stores metadata for projects, users, roles, replication policies, and images.

  • Job services: Used for image replication; local images can be replicated (synchronized) to other Harbor instances.

  • Log collector: Collects the logs of the other modules.

4. The Docker login flow

  • (a) The proxy container listening on port 80 receives the request. Nginx inside the container forwards it to the backend Registry container.

  • (b) The Registry container is configured for token-based authentication. If authentication fails, it returns error code 401, telling the Docker client to fetch a valid token from a given URL. In Harbor, this URL points to the Core Services token service;

  • (c) On receiving the error code, the Docker client sends a request to the token service URL, carrying the username and password in the request header per HTTP Basic authentication;

  • (d) After this request reaches the proxy container on port 80, Nginx again forwards it, per its preconfigured rules, to the UI container. The token service inside the UI container receives the request, decodes it, and extracts the username and password;

  • (e) With the username and password in hand, the token service checks the database and authenticates the user against the MySQL data. When the token service is configured for LDAP/AD authentication, it sends the request to an external LDAP/AD server instead. On success, the token service returns an HTTP success code, and the response body contains a token signed by the private key.
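The five steps can be condensed into a toy simulation. Everything here — the function names, the token value, the credential check — is invented purely to illustrate the 401-then-token round trip, not Harbor's real API:

```python
# Toy registry: rejects requests without a valid token (step b)
def registry_request(token=None):
    if token != "signed-token":
        return 401, {"auth_url": "https://harbor.local/service/token"}  # points at the token service
    return 200, {"status": "pull allowed"}


# Toy token service: checks credentials and issues a signed token (steps c-e)
def token_service(username, password):
    if (username, password) == ("admin", "Harbor12345"):
        return "signed-token"
    raise PermissionError("bad credentials")


def docker_login(username, password):
    # step a: the first request goes out with no token
    status, body = registry_request()
    if status == 401:
        # steps c-e: fetch a token from the URL the registry pointed at, then retry
        token = token_service(username, password)
        status, body = registry_request(token)
    return status, body


print(docker_login("admin", "Harbor12345"))  # (200, {'status': 'pull allowed'})
```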

5. Harbor high-availability options

In day-to-day operations the registry is critical. If the production registry goes down and cannot be restored quickly, development and the business both suffer, so adopt a reasonably high-availability setup, and at the very least make sure the data cannot be lost.

Harbor HA options include:

  • Multiple instances sharing backend storage (a mounted shared filesystem)
  • Multiple instances syncing data to each other (image replication mode)

6. Harbor installation and configuration

6.1 Installation options

  • Online installer: downloads Harbor's images from Docker Hub
  • Offline installer: for hosts without internet access

The latest stable offline installer is recommended; download packages from: https://github.com/goharbor/harbor/releases

Base environment

  • OS version: CentOS Linux release 7.4.1708 (Core)
  • Harbor version: harbor-offline-installer-v1.7.1.tgz

Note: Docker Compose must be installed before installing Harbor.

6.2 Installing Compose on Linux

6.2.1 Download the latest release of Docker Compose:
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
6.2.2 Make the binary executable:
sudo chmod +x /usr/local/bin/docker-compose

Note: if the docker-compose command fails after installation, check your PATH. You can also create a symlink in /usr/bin or any other directory on your PATH.

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

6.2.3 Verify:
[root@k8s-node1 harbor]# docker-compose version 
docker-compose version 1.23.2, build 1110ad01
docker-py version: 3.6.0
CPython version: 3.6.7
OpenSSL version: OpenSSL 1.1.0f 25 May 2017

6.3 Configuration parameters

Harbor's configuration parameters live in harbor.cfg, which contains two kinds of parameters:

  • Required parameters: must be set in the config file. If a user updates them in harbor.cfg and re-runs the install.sh script, they take effect.
  • Optional parameters: optional for updates; users can leave them at their defaults and change them in the web portal after starting Harbor. If set in harbor.cfg, they take effect only on Harbor's first launch; subsequent edits to these parameters in harbor.cfg are ignored.

The most important parameters (see the official docs if you need the others):

  • hostname: the target host's hostname; do not use localhost or 127.0.0.1
  • ui_url_protocol: the access protocol, http or https; defaults to http
  • db_password: root password of the PostgreSQL database used for db_auth
  • max_job_workers: (default 10) maximum number of replication workers in the job service
  • ssl_cert: path to the SSL certificate, applied only when the protocol is set to https
  • ssl_cert_key: path to the SSL key, applied only when the protocol is set to https
  • harbor_admin_password: the admin's initial password, effective only on Harbor's first launch; afterwards it is ignored and the admin password should be set in the portal. Note that the default credentials are admin / Harbor12345
  • auth_mode: the authentication type; db_auth by default
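These are plain `key = value` lines, so reading them programmatically is easy. A small sketch (the file content below is a made-up fragment, not a full config):

```python
def parse_harbor_cfg(text):
    """Parse harbor.cfg-style `key = value` lines, skipping comments and blanks."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "=" in line:
            key, _, value = line.partition("=")
            params[key.strip()] = value.strip()
    return params


sample = """
# hostname of the target host
hostname = 192.168.0.6
ui_url_protocol = https
max_job_workers = 10
"""

cfg = parse_harbor_cfg(sample)
print(cfg["hostname"], cfg["ui_url_protocol"])  # 192.168.0.6 https
```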

6.4 Installing Harbor

6.4.1 Offline installation (recommended)
Download the package from https://github.com/goharbor/harbor/releases:
wget https://storage.googleapis.com/harbor-releases/release-1.7.0/harbor-offline-installer-v1.7.1.tgz
HTTPS is recommended for better security

Because Harbor ships without certificates, it serves registry requests over HTTP by default. However, enabling security is strongly recommended for any production environment. Harbor runs an Nginx instance as a reverse proxy for all services; you can use the prepare script to configure Nginx for HTTPS.

In test or development environments you may use a self-signed certificate instead of one from a trusted third-party CA. The following shows how to create your own CA and use it to sign server and client certificates.

Create a certificate authority:
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=TW/ST=Taipei/L=Taipei/O=example/OU=Personal/CN=yourdomain.com" \
-key ca.key \
-out ca.crt
Create a server certificate

Assume your registry's hostname is yourdomain.com and its DNS record points to the host running Harbor. In production, obtain the certificate from a CA first; in test or development, you can use your own CA. The certificate usually consists of a .crt file and a .key file, e.g. yourdomain.com.crt and yourdomain.com.key.

  • 1) Create your own private key:

    openssl genrsa -out yourdomain.com.key 4096
  • 2) Generate a certificate signing request:

If you connect to the registry host with an FQDN such as yourdomain.com, you must use yourdomain.com as the CN (Common Name).

openssl req -sha512 -new \
-subj "/C=TW/ST=Taipei/L=Taipei/O=example/OU=Personal/CN=yourdomain.com" \
-key yourdomain.com.key \
-out yourdomain.com.csr

  • 3) Generate the registry host's certificate:

Whether you connect to the registry host via an FQDN like yourdomain.com or via an IP, run this command to generate a certificate that satisfies the Subject Alternative Name (SAN) and x509 v3 extension requirements:

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=yourdomain.com
DNS.2=yourdomain
DNS.3=hostname
EOF

openssl x509 -req -sha512 -days 3650 \
-extfile v3.ext \
-CA ca.crt -CAkey ca.key -CAcreateserial \
-in yourdomain.com.csr \
-out yourdomain.com.crt
6.4.2 Configure Harbor

Copy the certificate and key into place (a certificate purchased for your public domain is recommended; docker pull/push uses HTTPS by default):

cp yourdomain.com.crt /data/cert/
cp yourdomain.com.key /data/cert/

Edit the config file

hostname = 192.168.0.6
ui_url_protocol = https
ssl_cert = /data/cert/yourdomain.com.crt
ssl_cert_key = /data/cert/yourdomain.com.key

Generate Harbor's configuration files
./prepare

Install Harbor
./install.sh

If the following error appears during installation, verify that Docker Compose installed correctly:
✖ Need to install docker-compose(1.7.1+) by yourself first and run this script again.

6.4.3 Using an external PostgreSQL (recent versions no longer support MySQL: https://github.com/goharbor/harbor/issues/6534)
harbor.cfg settings:
db_host = 192.168.32.18
db_password = password
db_port = 5432
db_user = harbor
Install and configure PostgreSQL:
yum install https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-1.noarch.rpm
# Install the client
yum install postgresql10
# Install the server
yum install postgresql10-server

# Initialize and start
postgresql-setup initdb
systemctl enable postgresql.service
systemctl start postgresql.service

For details, see https://www.postgresql.org/download/linux/redhat/

Create the user and database:
CREATE USER harbor WITH PASSWORD 'password';
CREATE DATABASE registry OWNER harbor;
GRANT ALL PRIVILEGES ON DATABASE registry TO harbor;

# In postgresql.conf, listen on all interfaces
listen_addresses = '*'

# In pg_hba.conf, allow password authentication from remote hosts
host all all 0.0.0.0/0 md5
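After restarting PostgreSQL so that the listen_addresses and pg_hba changes take effect, you can check that the Harbor host can reach the database; the host, port, and credentials below are the values configured above:

```
# Should print a single row with value 1 if remote access is working
PGPASSWORD=password psql -h 192.168.32.18 -p 5432 -U harbor -d registry -c 'SELECT 1;'
```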
Troubleshooting notes
Pitfall 1: unable to log in to the web UI

Solution:
1. docker ps shows the harbor-adminserver container restarting constantly:

bf880eb451cd        goharbor/harbor-adminserver:v1.7.1       "/harbor/start.sh"       48 seconds ago      Restarting (1) 17 seconds ago                                                                         harbor-adminserver

2. The logs show the following:

Jan 30 08:48:29 172.22.0.1 adminserver[64872]: 2019-01-30T00:48:29Z [INFO] the path of key used by key provider: /etc/adminserver/key
Jan 30 08:48:29 172.22.0.1 adminserver[64872]: 2019-01-30T00:48:29Z [FATAL] [main.go:45]: failed to initialize the system: read /etc/adminserver/key: is a directory

3. The cause: secretkey_path was changed in harbor.cfg, but docker-compose.yml was not updated to match. After correcting docker-compose.yml, run:

docker-compose down
./prepare
docker-compose up -d


2019-02-17
Calling the Jenkins API from Python

1. Introduction

Most companies still rely heavily on Jenkins for day-to-day operations, so this post takes a quick look at using the Jenkins API from Python.

2. Python libraries

2.1 python-jenkins

Official python-jenkins documentation:
http://python-jenkins.readthedocs.io/en/latest/examples.html#example-1-get-version-of-jenkins

2.2 jenkinsapi

Official jenkinsapi documentation:
https://pypi.org/project/jenkinsapi/#description

3. The python-jenkins library

3.1 Installation

pip install python-jenkins
# or
easy_install python-jenkins

3.2 Examples

3.2.1 Get the Jenkins version

import jenkins

server = jenkins.Jenkins('http://localhost:8080', username='admin', password='admin')

user = server.get_whoami()

version = server.get_version()

print('Hello %s from Jenkins %s' % (user['fullName'], version))

3.2.2 A simple API wrapper

# coding:utf-8
import jenkins

class Jenkins_Api(object):
    def __init__(self):
        self._url = 'http://192.168.72.128:8080'
        self._username = "admin"
        self._password = "admin"

    def get_server_instance(self):
        return jenkins.Jenkins(self._url, username=self._username, password=self._password)

    def get_version(self):
        return self.get_server_instance().get_version()

    def get_jobs(self):
        server = self.get_server_instance()
        return {
            "jobs_count": server.jobs_count(),
            "get_jobs": server.get_jobs()}

    def create_job(self, job_name, config_xml=jenkins.EMPTY_CONFIG_XML):
        # create_job requires a config XML; EMPTY_CONFIG_XML is a bare freestyle job
        return self.get_server_instance().create_job(job_name, config_xml)

    def get_job_config(self, job_name):
        return self.get_server_instance().get_job_config(job_name)

    def copy_job(self, job_name, new_job_name):
        return self.get_server_instance().copy_job(job_name, new_job_name)

    def build_job(self, job_name, parameters=None):
        return self.get_server_instance().build_job(job_name, parameters=parameters)

    def delete_build(self, job_name, number):
        return self.get_server_instance().delete_build(job_name, number)

    def get_job_info(self, job_name):
        # Returns the last completed build number for the given job
        return self.get_server_instance().get_job_info(job_name)['lastCompletedBuild']['number']

    def get_build_info(self, job_name, number):
        return self.get_server_instance().get_build_info(job_name, number)

    def get_build_console_output(self, job_name, number):
        return self.get_server_instance().get_build_console_output(job_name, number)

    def create_view(self, view_name):
        return self.get_server_instance().create_view(view_name, config_xml=jenkins.EMPTY_VIEW_CONFIG_XML)

    def get_views(self):
        return self.get_server_instance().get_views()

if __name__ == '__main__':
    print(Jenkins_Api().get_views())
    # print(Jenkins_Api().get_build_console_output(job_name='test', number=7))
    print(Jenkins_Api().build_job(job_name='test', parameters={"Branch": "origin/master"}))
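build_job only enqueues the build and returns immediately (recent python-jenkins versions return a queue item id), so scripts usually need to poll for completion. A small generic polling helper can cover that; the helper itself is illustrative, not part of python-jenkins:

```python
import time

def wait_until(predicate, timeout=60, interval=2):
    """Poll predicate() until it returns a truthy value, or raise on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %d seconds" % timeout)

# Against a live server this could wait for the queued build to start:
# queue_id = server.build_job('test', parameters={"Branch": "origin/master"})
# executable = wait_until(lambda: server.get_queue_item(queue_id).get('executable'))
```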

4. The jenkinsapi library

4.1 Installation

pip install jenkinsapi
# or
easy_install jenkinsapi

4.2 Examples

4.2.1 Get the Jenkins version

from jenkinsapi.jenkins import Jenkins

J = Jenkins('http://192.168.72.128:8080', username="admin", password="admin")
print(J.version)

4.2.2 Get job details

from jenkinsapi.jenkins import Jenkins

def get_server_instance():
    server = Jenkins('http://192.168.72.128:8080', username="admin", password="admin")
    return server

def get_job_details():
    server = get_server_instance()
    for job_name, job_instance in server.get_jobs():
        print("-" * 20)
        print('Job Name:%s' % (job_instance.name))
        print('Job Description:%s' % (job_instance.get_description()))
        print('Is Job running:%s' % (job_instance.is_running()))
        print('Is Job enabled:%s' % (job_instance.is_enabled()))

4.2.3 Get the latest completed build's console output

from jenkinsapi.jenkins import Jenkins

def get_server_instance():
    server = Jenkins('http://192.168.72.128:8080', username="admin", password="admin")
    return server

def get_last_console():
    server = get_server_instance()
    for job_name in server.keys():
        print(server[job_name].get_last_completed_build().get_console())

4.2.4 A simple API wrapper

from jenkinsapi.jenkins import Jenkins
import json

class Jenkins_Api(object):
    def __init__(self):
        self._url = 'http://192.168.72.128:8080'
        self._username = "admin"
        self._password = "admin"

    def get_server_instance(self):
        server = Jenkins(self._url, username=self._username, password=self._password)
        return server

    def get_version(self):
        return self.get_server_instance().version

    def get_job_last_console(self, job_name):
        # Fetch the build once instead of opening a new connection per field
        build = self.get_server_instance()[job_name].get_last_completed_build()
        data = {
            'time': str(build.get_timestamp()),
            'status': build.get_status(),
            'res': build.get_console()
        }
        return data
        # return json.dumps(data)

if __name__ == '__main__':
    print(Jenkins_Api().get_job_last_console(job_name='test'))
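The commented-out json.dumps(data) above only works because get_timestamp()'s datetime was first wrapped in str(); json cannot serialize datetime objects directly. An alternative is json.dumps's standard default= parameter, shown here on a payload shaped like get_job_last_console()'s return value:

```python
import json
from datetime import datetime, timezone

# Example payload; a real one would come from get_job_last_console()
payload = {
    "time": datetime(2019, 2, 17, 12, 0, tzinfo=timezone.utc),
    "status": "SUCCESS",
}

# default=str is called for any value json cannot serialize natively
print(json.dumps(payload, default=str))
# -> {"time": "2019-02-17 12:00:00+00:00", "status": "SUCCESS"}
```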

2019-02-17
Using a Shadowsocks proxy with Docker on CentOS 7

Install the Privoxy client

yum install privoxy

Configure Privoxy to forward HTTP traffic to the Shadowsocks SOCKS5 server:

[root@node1 docker.service.d]# grep -v "^#\|^$" /etc/privoxy/config|grep forward-socks5t
forward-socks5t / 10.80.80.78:1080 .
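Privoxy listens on 127.0.0.1:8118 by default, which is the address the Docker proxy settings in the next step point at. If you need to confirm or change it, the relevant directive in /etc/privoxy/config is:

```
# /etc/privoxy/config -- default HTTP listen address (assumed unchanged here)
listen-address  127.0.0.1:8118
```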

Route Docker's registry traffic through the proxy

Add the following drop-in under the /etc/systemd/system/docker.service.d directory:
[Service]
Environment=HTTP_PROXY=http://127.0.0.1:8118/
Environment=HTTPS_PROXY=http://127.0.0.1:8118/
Environment=NO_PROXY=localhost,10.80.80.251,m1empwb1.mirror.aliyuncs.com,docker.io,registry.cn-hangzhou.aliyuncs.com
[root@node1 docker.service.d]# systemctl daemon-reload

[root@node1 docker.service.d]# systemctl show docker |grep 127.0.0.1
Environment=GOTRACEBACK=crash DOCKER_DNS_OPTIONS=\x20\x20\x20\x20\x20--dns\x20192.168.5.21\x20--dns\x20192.168.5.22\x20--dns\x20223.5.5.5\x20\x20\x20\x20\x20\x20\x20--dns-search\x20default.svc.cluster.local\x20--dns-search\x20svc.cluster.local\x20\x20\x20\x20\x20\x20\x20--dns-opt\x20ndots:2\x20--dns-opt\x20timeout:2\x20--dns-opt\x20attempts:2\x20\x20\x20 DOCKER_OPTS=\x20\x20--data-root=/var/lib/docker\x20--log-opt\x20max-size=50m\x20--log-opt\x20max-file=5\x20--iptables=false HTTP_PROXY=http://127.0.0.1:8118/ HTTPS_PROXY=http://127.0.0.1:8118/ NO_PROXY=localhost,10.80.80.251,m1empwb1.mirror.aliyuncs.com,docker.io,registry.cn-hangzhou.aliyuncs.com

[root@node1 docker.service.d]# systemctl restart docker
Verify
[root@node1 docker.service.d]# docker pull gcr.io/kubernetes-helm/tiller:v2.2.2
v2.2.2: Pulling from kubernetes-helm/tiller
53ebc9bfbcc0: Pull complete
8065d4c79ab9: Pull complete
Digest: sha256:82677f561f8dd67b6095fe7b9646e6913ee99e1d6fdf86705adbf99a69a7d744
Status: Downloaded newer image for gcr.io/kubernetes-helm/tiller:v2.2.2

If verification fails with the following error, logging out of the registry fixes it:

[root@node1 docker.service.d]# docker pull gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.3.0
Error response from daemon: Get https://gcr.io/v2/google_containers/cluster-proportional-autoscaler-amd64/manifests/1.3.0: unauthorized: Not Authorized.

[root@node1 docker.service.d]# docker logout gcr.io
Removing login credentials for gcr.io
