Baidu PaddlePaddle "Youth With You 2" Python Beginner-to-Pro Camp: Check-in Study Summary
I previously took part in Baidu PaddlePaddle's 7-day deep learning CV camp on the epidemic, going from learning the basic concepts to using Paddle for simple tasks such as mask-wearing detection, and I got a great deal out of it. This time I joined the more basic beginners' camp, and I feel my understanding of deep learning, Paddle, and PaddleHub has deepened further. At this morning's stand-up I even recommended EasyDL to my manager, shared suggestions and ideas based on my understanding of our company's business, and discussed our department's future technical direction. My manager agreed with me and asked for the URL of EasyDL's application case studies. So I sincerely thank the teachers and class advisors of the Baidu Paddle camp, and thank Baidu for providing such a good activity and platform.
I really like the camp's learning atmosphere and format: live streams, Q&A, and interaction during lessons, plus hands-on practice on AI Studio. When an assignment got stuck, I would search the group chat history, check the discussion board, and look things up online; this environment made me feel I was improving quickly. I like using mental models to analyze things in work and life, so let me share a learning method from a training course I once took, for turning knowledge into ability:
1. Master the 20% of core knowledge;
2. Move knowledge and problems toward each other;
3. Train systematically.
Finally, a summary of the technical content:
Day1 – AI Overview and Getting Started:
Master Python's basic syntax.
The basics matter: without a firm grasp of them, assignments throw all kinds of odd errors, such as full-width Chinese punctuation, missing operators, mismatched brackets, typos, and wrong file paths. The combined use of conditionals and loops in particular shows up in almost every assignment. I recommend repeating the Python exercises on the Runoob tutorial site until they feel fluent.
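The loop-plus-conditional combination mentioned above can be practiced with a small exercise like this one (the exercise itself is my own illustration, not from the course):

```python
# Count vowels and digits in a string using a loop with conditionals
def count_vowels_and_digits(text):
    vowels = 0
    digits = 0
    for ch in text.lower():
        if ch in 'aeiou':
            vowels += 1
        elif ch.isdigit():
            digits += 1
    return vowels, digits

print(count_vowels_and_digits('Paddle 2020'))  # (2, 4)
```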
Day2 – Advanced Python
The assignment was a web crawler that downloads contestant photos:
import os
import random
import requests
import bs4
from bs4 import BeautifulSoup
meizi_headers = [
"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
"Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)",
'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
'Opera/9.25 (Windows NT 5.1; U; en)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
'Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/1.2.9',
"Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.7 (KHTML, like Gecko) Ubuntu/11.04 Chromium/16.0.912.77 Chrome/16.0.912.77 Safari/535.7",
"Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0",
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
]
# Give the request a header pool to mimic a Chrome browser
headers = {'User-Agent': random.choice(meizi_headers)}
# Crawl target URL
mziTu = 'http://www.win4000.com/mt/yushuxin_1.html'
# Storage location
save_path = r'D:\pmp\zhao'

# Create the folder if it does not exist
def createFile(file_path):
    if not os.path.exists(file_path):
        os.makedirs(file_path)
    # Change the working directory to the folder created above
    os.chdir(file_path)
# 下载⽂件
# Download images
def download():
    global headers
    headers = {'User-Agent': random.choice(meizi_headers)}
    # Photo-list page to scrape
    href = 'https://m.douban.com/movie/celebrity/1424975/all_photos'
    res_sub_1 = requests.get(href, headers=headers)
    soup_sub_1 = BeautifulSoup(res_sub_1.text, 'html.parser')
    # ------ exception handling is advisable here ------
    try:
        imgnum = 0
        # Every photo link sits inside the <ul id="photolist"> element
        imgs = soup_sub_1.find('ul', {"id": "photolist"}).find_all('a')
        for img in imgs:
            if isinstance(img, bs4.element.Tag):
                imgnum += 1
                # Extract the image URL from the inline style attribute
                url = img.attrs['style'][22:-1]
                file_name = str(imgnum) + '.jpg'
                # Add a Referer header to get past hotlink protection
                headers = {'User-Agent': random.choice(meizi_headers), 'Referer': url}
                res_img = requests.get(url, headers=headers)
                print('Saving image', url)
                f = open(file_name, 'wb')
                f.write(res_img.content)
                print(file_name, 'saved!')
                f.close()
    except Exception as e:
        print(e)
# Main entry point
def main():
    # Create the storage folder, then download the photos
    createFile(save_path)
    download()
    # # Pagination logic kept from an earlier version of the crawler:
    # res = requests.get(mziTu, headers=headers)
    # # Parse with the built-in html.parser
    # soup = BeautifulSoup(res.text, 'html.parser')
    # # Get the total number of pages from the front page
    # img_max = soup.find('div', class_='nav-links').find_all('a')[3].text
    # for i in range(1, int(img_max) + 1):
    #     # Build each page's URL
    #     page = mziTu if i == 1 else mziTu + 'page/' + str(i)
    #     file = save_path + '\\' + str(i)
    #     createFile(file)
    #     # Download each page's images
    #     print("Gallery page: " + page)
    #     download(page, file)

if __name__ == '__main__':
    main()
Day3 – Common Python Libraries for AI
Python is used heavily in data mining and deep learning; among the most widely used libraries are NumPy, pandas, Matplotlib, and PIL.
The main techniques are analyzing and processing data with NumPy and pandas, and drawing and displaying images with Matplotlib and PIL.
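As a minimal sketch of that workflow, NumPy does the array math; pandas would organize the result into a table and Matplotlib would plot it. The score data below is made up purely for illustration:

```python
import numpy as np

# A small, made-up score matrix: rows are contestants, columns are judges
scores = np.array([[9.0, 8.5, 9.2],
                   [7.8, 8.0, 8.4]])

# Per-contestant mean score, averaging along the judge axis
means = scores.mean(axis=1)
print(means)
```

The same `means` array could then be passed to `matplotlib.pyplot.bar` for a quick chart.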
Day4 – PaddleHub Experience and Application
PaddleHub lets you work with deep learning models as easily as operating ordinary software.
Day5 – EasyDL Experience and Assignment
1. Understand EasyDL's product concept and application scenarios:
EasyDL gives enterprises and developers a one-stop service, from complete and secure data services and large-scale distributed model training to flexible model deployment and prediction.
2. Comprehensive final assignment (comment scraping, word-frequency statistics and visualization, content moderation).
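For the word-frequency part of the assignment, the counting step can be sketched with the standard library alone. The sample comments below are invented placeholders, pre-split by spaces; real Chinese comments would first need a segmenter such as jieba:

```python
import collections

# Invented placeholder comments, tokens separated by spaces
comments = ['加油 加油 小姐姐', '喜欢 小姐姐', '加油']

# Split each comment on whitespace and count every token
counter = collections.Counter()
for c in comments:
    counter.update(c.split())

print(counter.most_common(2))  # [('加油', 3), ('小姐姐', 2)]
```

The resulting counts could then be fed into Matplotlib (or a word-cloud library) for the visualization step.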