Everything That Happened Over Winter Break

Android Studio notes | 2019-02-09 02:25

About everything that happened over winter break

When I was a little kid, New Year meant snow, the color red, firecrackers, and an honest longing for joy. Year after year, though, the holiday has gotten plainer, boiled down to nothing but the holiday itself. Is it me that changed or the New Year? I can't quite tell yet. No matter; pink has replaced yellow as the color I'm fond of. And what does it matter, it's still damnably sweet. The cold wind scrapes my face and I can't help shivering. The lonely traveler at the forbidden line of the border has probably never seen real clouds either. --- I made all of that up.

To finish the unfinished plan

Build an image classifier, mainly for football boots (I just like football), deploy it behind a web server, and get it running standalone on a phone.

The basic steps:

1. Collect images and preprocess the data

2. Train the model

3. Deploy to a server

4. Real-time classification on Android via TFLite

CHAPTER 1

Collecting images: the seasoned folks have plenty of ways to do this, and it's not hard for us either. There's a Firefox add-on that does it nice and cleanly, but that route still gets tiring.

In the end I decided to go with a crawler.

Inspect the image element and peel it open layer by layer: the img tag sits inside a clear, simple structure.

import requests
from bs4 import BeautifulSoup
import urllib.request

These three packages are all we need.

# fetch one listing page (the site prefix is left out here, as in the original)
r = requests.get('' + str(page) + '.html')
content = r.text
soup = BeautifulSoup(content, 'lxml')
# find the divs with class 'viewbox'
divs = soup.find_all(class_='viewbox')
# the <img> tag of the first viewbox, as a string like '<img alt="..." src="..."/>'
kind = str(divs[0].a.img)
information = kind.split()
name = information[1]            # the alt="..." chunk
name_r = name[5:]                # strip the leading 'alt="'
path_download = information[-1]  # the src="..." chunk
path_r = path_download[5:-3]     # strip 'src="' and the trailing '"/>' to get the download URL
# save the image into path_save (the target directory, defined elsewhere) and we're done
urllib.request.urlretrieve(path_r, path_save + str(name_r) + '.jpg')

Loop over page and over divs, walk the whole site, and the images come down. Images in hand, the next step is picking a model.
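A rough sketch of what that double loop looks like when it wraps the snippet above; base_url, page_count and path_save are placeholders I'm supplying, not values from the original post:

import requests
import urllib.request
from bs4 import BeautifulSoup

base_url = ''          # site prefix, left blank as in the original
page_count = 100       # assumed number of listing pages
path_save = './img/'   # assumed download folder

for page in range(1, page_count + 1):
    # fetch one listing page and parse it
    r = requests.get(base_url + str(page) + '.html')
    soup = BeautifulSoup(r.text, 'lxml')
    for div in soup.find_all(class_='viewbox'):
        a = div.a
        if a is None or a.img is None:
            continue
        # read alt/src from the tag directly instead of slicing its string form
        name_r = a.img.get('alt', 'unnamed')
        path_r = a.img.get('src')
        try:
            urllib.request.urlretrieve(path_r, path_save + str(name_r) + '.jpg')
        except Exception as ex:
            print('skipped %s: %s' % (path_r, ex))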

For the model, Inception v4. The convenient way to train it is TensorFlow-Slim. You'll need the Inception v4 checkpoint (the .gz archive), plus the slim code and the tensorflow source.

pip install opencv-python

python download_and_convert_data.py \
  --dataset_name=my_football_shoes_3 \
  --dataset_dir=/home/tomjeans/football_shoes/Inception_v4_slim/img
# convert the images into TFRecord format
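One thing the command hides: slim only knows datasets that are registered by name, so --dataset_name=my_football_shoes_3 implies a converter and a dataset definition were added under slim's datasets/ and wired into dataset_factory.py (download_and_convert_data.py dispatches by the same name). A rough sketch of the definition half, modeled on slim's datasets/flowers.py; the file pattern, split sizes and class count below are placeholders, not the real numbers:

# datasets/my_football_shoes_3.py -- sketch modeled on slim's datasets/flowers.py
# (placeholder split sizes and class count; adjust to the converted TFRecords)
import os
import tensorflow as tf
from datasets import dataset_utils

slim = tf.contrib.slim

_FILE_PATTERN = 'my_football_shoes_3_%s_*.tfrecord'
SPLITS_TO_SIZES = {'train': 3000, 'validation': 300}   # placeholders
_NUM_CLASSES = 3                                        # placeholder

_ITEMS_TO_DESCRIPTIONS = {
    'image': 'A color image of varying size.',
    'label': 'A single integer between 0 and num_classes - 1.',
}

def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
    if split_name not in SPLITS_TO_SIZES:
        raise ValueError('split name %s was not recognized.' % split_name)
    file_pattern = os.path.join(dataset_dir,
                                (file_pattern or _FILE_PATTERN) % split_name)

    keys_to_features = {
        'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
        'image/format': tf.FixedLenFeature((), tf.string, default_value='jpg'),
        'image/class/label': tf.FixedLenFeature([], tf.int64),
    }
    items_to_handlers = {
        'image': slim.tfexample_decoder.Image(),
        'label': slim.tfexample_decoder.Tensor('image/class/label'),
    }
    decoder = slim.tfexample_decoder.TFExampleDecoder(keys_to_features,
                                                      items_to_handlers)

    labels_to_names = None
    if dataset_utils.has_labels(dataset_dir):
        labels_to_names = dataset_utils.read_label_file(dataset_dir)

    return slim.dataset.Dataset(
        data_sources=file_pattern,
        reader=reader or tf.TFRecordReader,
        decoder=decoder,
        num_samples=SPLITS_TO_SIZES[split_name],
        items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
        num_classes=_NUM_CLASSES,
        labels_to_names=labels_to_names)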

Alright, time to get going. To start training, go into the models/research/slim folder.

python train_image_classifier.py \
  --dataset_name=my_football_shoes_3 \
  --dataset_dir=/home/tomjeans/football_shoes/Inception_v4_slim/img \
  --checkpoint_path=/home/tomjeans/football_shoes/Inception_v4_slim/pre_trained/inception_v4.ckpt \
  --model_name=inception_v4 \
  --checkpoint_exclude_scopes=InceptionV4/Logits,InceptionV4/AuxLogits/Aux_logits \
  --trainable_scopes=InceptionV4/Logits,InceptionV4/AuxLogits/Aux_logits \
  --train_dir=/home/tomjeans/football_shoes/Inception_v4_slim/model_trained/2.4 \
  --learning_rate=0.001 \
  --learning_rate_decay_factor=0.76 \
  --num_epochs_per_decay=50 \
  --moving_average_decay=0.9999 \
  --optimizer=adam \
  --ignore_missing_vars=True \
  --batch_size=32

It trained for a bit over 30,000 steps. Now convert the result to a .pb so it can be called:

python export_inference_graph.py \
  --model_name=inception_v4 \
  --output_file=./my_inception_v4.pb \
  --dataset_name=my_football_shoes_3 \
  --dataset_dir=/home/tomjeans/football_shoes/Inception_v4_slim/img

python ~/tensorflow/tensorflow/python/tools/freeze_graph.py \
  --input_graph=my_inception_v4.pb \
  --input_checkpoint=/home/tomjeans/football_shoes/Inception_v4_slim/model_trained/2.4/model.ckpt-36313 \
  --output_graph=./my_inception_v4_freeze.pb \
  --input_binary=True \
  --output_node_names=InceptionV4/Logits/Predictions

That gives you these files.

cp /home/tomjeans/football_shoes/Inception_v4_slim/img/labels.txt ./my_inception_v4_freeze.label

Let's see how it does.
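To spot-check the frozen graph you can run roughly the following script. This is a reconstruction, not the exact script: it assumes the exported graph has a float placeholder named input of size 299x299x3 and the usual Inception preprocessing of scaling pixels to [-1, 1]; cv2 is the opencv-python package installed above, and test.jpg is a placeholder image path:

import cv2
import numpy as np
import tensorflow as tf

# load the frozen graph into the default graph
graph_def = tf.GraphDef()
with open('my_inception_v4_freeze.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

# read and preprocess a test image: BGR -> RGB, resize to 299x299, scale to [-1, 1]
img = cv2.imread('test.jpg')   # placeholder test image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (299, 299)).astype(np.float32)
img = (img / 255.0 - 0.5) * 2.0
img = np.expand_dims(img, 0)

with tf.Session() as sess:
    probs = sess.run('InceptionV4/Logits/Predictions:0', {'input:0': img})
    probs = np.squeeze(probs)
    # print the top-5 entries against the label file copied below (slim writes "index:name" lines)
    labels = [line.strip() for line in open('my_inception_v4_freeze.label')]
    for idx in probs.argsort()[-5:][::-1]:
        print(labels[idx] if idx < len(labels) else idx, probs[idx])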

Done. Next, deploy the server. Install Flask:

pip install flask

# coding=utf-8
import os
import sys
import time
import uuid
import importlib

importlib.reload(sys)
# sys.setdefaultencoding("utf-8")  # only needed on Python 2.x

import numpy as np
import tensorflow as tf
from flask import Flask, request, redirect, url_for, send_from_directory

# from classify_image import run_inference_on_image
from classify_image import NodeLookup

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('model_dir', '', """Path to graph_def pb, """)
tf.app.flags.DEFINE_string('model_name', 'my_inception_v4_freeze.pb', '')
tf.app.flags.DEFINE_string('label_file', 'my_inception_v4_freeze.label', '')
tf.app.flags.DEFINE_string('upload_folder', '/tmp/', '')
tf.app.flags.DEFINE_integer('num_top_predictions', 5,
                            """Display this many predictions.""")
tf.app.flags.DEFINE_integer('port', '5001',
                            'server with port, if no port, use default port 80')
tf.app.flags.DEFINE_boolean('debug', False, '')

UPLOAD_FOLDER = FLAGS.upload_folder
ALLOWED_EXTENSIONS = set(['jpg', 'JPG', 'jpeg', 'JPEG', 'png'])

app = Flask(__name__)
app._static_folder = UPLOAD_FOLDER


def allowed_files(filename):
    # only accept image extensions we know how to handle
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS


def rename_filename(old_file_name):
    # give every upload a unique name so files never collide
    basename = os.path.basename(old_file_name)
    name, ext = os.path.splitext(basename)
    new_name = str(uuid.uuid1()) + ext
    return new_name


def init_graph(model_name=FLAGS.model_name):
    # load the frozen graph into the default graph once, at startup
    with open(model_name, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')


def run_inference_on_image(file_name):
    image_data = open(file_name, 'rb').read()
    sess = app.sess
    softmax_tensor = sess.graph.get_tensor_by_name('InceptionV4/Logits/Predictions:0')
    predictions = sess.run(softmax_tensor,
                           {'input:0': image_data})
    predictions = np.squeeze(predictions)

    # Creates node ID --> English string lookup.
    node_lookup = app.node_lookup

    top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
    top_names = []
    for node_id in top_k:
        human_string = node_lookup.id_to_string(node_id)
        top_names.append(human_string)
        score = predictions[node_id]
        print('id:[%d] name:[%s] (score = %.5f)' % (node_id, human_string, score))
    return predictions, top_k, top_names


def inference(file_name):
    try:
        predictions, top_k, top_names = run_inference_on_image(file_name)
        print(predictions)
    except Exception as ex:
        print(ex)
        return ""
    # build the HTML fragment: the uploaded image plus the top-k predictions
    new_url = '/static/%s' % os.path.basename(file_name)
    image_tag = '<img src="%s"></img><p>'
    new_tag = image_tag % new_url
    format_string = ''
    for node_id, human_name in zip(top_k, top_names):
        score = predictions[node_id]
        format_string += '%s (score:%.5f)<BR>' % (human_name, score)
    ret_string = new_tag + format_string + '<BR>'
    return ret_string


@app.route("/", methods=['GET', 'POST'])
def root():
    result = """
    <!doctype html>
    <title>prediction</title>
    <h1>Upload a photo, kid</h1>
    <form action="" method=post enctype=multipart/form-data>
      <p><input type=file name=file value='choose image'>
         <input type=submit value='upload'>
    </form>
    <p>%s</p>
    """ % "<br>"
    if request.method == 'POST':
        file = request.files['file']
        old_file_name = file.filename
        if file and allowed_files(old_file_name):
            filename = rename_filename(old_file_name)
            file_path = os.path.join(UPLOAD_FOLDER, filename)
            file.save(file_path)
            type_name = 'N/A'
            print('file saved to %s' % file_path)
            start_time = time.time()
            out_html = inference(file_path)
            duration = time.time() - start_time
            print('duration:[%.0fms]' % (duration * 1000))
            return result + out_html
    return result


if __name__ == "__main__":
    print('listening on port %d' % FLAGS.port)
    init_graph(model_name=FLAGS.model_name)
    label_file = FLAGS.label_file
    if not FLAGS.label_file:
        # fall back to <model_name>.label if no label file was given
        label_file, _ = os.path.splitext(FLAGS.model_name)
        label_file = label_file + '.label'
    node_lookup = NodeLookup(label_file)
    app.node_lookup = node_lookup
    sess = tf.Session()
    app.sess = sess
    app.run(host='0.0.0.0', port=FLAGS.port, debug=FLAGS.debug, threaded=True)
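server.py imports NodeLookup from a local classify_image.py that isn't shown in the post. A minimal stand-in, assuming the .label file is in slim's labels.txt format of one "index:classname" pair per line, could look like this:

# classify_image.py -- a minimal NodeLookup stand-in
# (assumption: the .label file uses slim's "index:classname" line format)
class NodeLookup(object):
    def __init__(self, label_file):
        self.id_to_name = {}
        with open(label_file) as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                idx, name = line.split(':', 1)
                self.id_to_name[int(idx)] = name

    def id_to_string(self, node_id):
        # fall back to the raw id if it's missing from the label file
        return self.id_to_name.get(int(node_id), str(node_id))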

python server.py \
  --model_name=my_inception_v4_freeze.pb \
  --label_file=my_inception_v4_freeze.label \
  --upload_folder=/home/tomjeans/football_shoes/Inception_v4_slim/upload_img
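Besides opening it in a browser, you can sanity-check the endpoint from a script. A quick test with requests; the port matches the flag default above and test.jpg is a placeholder image path:

import requests

# post a local image to the Flask server and print the HTML it returns
with open('test.jpg', 'rb') as f:   # placeholder image path
    resp = requests.post('http://127.0.0.1:5001/', files={'file': ('test.jpg', f)})
print(resp.status_code)
print(resp.text)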

The results are below.

Now for the phone. First make sure tensorflow>=1.9 and install tensorflow-hub.

git clone the tensorflow-hub repository

cd hub/examples/image_retraining

python retrain.py \
  --image_dir /home/tomjeans/hub/examples/image_retraining/my_football_shoes_3 \
  --output_graph /home/tomjeans/hub/examples/image_retraining/result/output_graph.pb \
  --intermediate_output_graphs_dir /home/tomjeans/hub/examples/image_retraining/result/intermediate_result \
  --intermediate_store_frequency 1000 \
  --output_labels /home/tomjeans/hub/examples/image_retraining/result/output_labels.txt \
  --summaries_dir /home/tomjeans/hub/examples/image_retraining/result/retrain_logs \
  --how_many_training_steps 4000 \
  --learning_rate 0.01 \
  --testing_percentage 10 \
  --validation_percentage 10 \
  --eval_step_interval 10 \
  --train_batch_size 100 \
  --test_batch_size -1 \
  --validation_batch_size 100 \
  --bottleneck_dir /home/tomjeans/hub/examples/image_retraining/result/bottleneck \
  --final_tensor_name final_result \
  --flip_left_right False \
  --random_crop 0 \
  --random_scale 0 \
  --random_brightness 0

Without a proxy to get over the firewall, the Hub module won't download here. Next, convert the model to TFLite format:

tflite_convert \
  --output_file=/home/tomjeans/hub/examples/image_retraining/result/model_tflite/converted_model.tflite \
  --graph_def_file=/home/tomjeans/hub/examples/image_retraining/result/output_graph.pb \
  --input_arrays=Placeholder \
  --output_arrays=final_result

MobileNet v2 really is small: the converted model is only ten-odd MB.
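A quick way to confirm the .tflite file loads and to see its input shape is the TFLite interpreter from Python. In TF 1.9-1.12 it lives under tf.contrib.lite rather than tf.lite, so adjust the import to your version; the model path is a placeholder:

import numpy as np
import tensorflow as tf

# load the converted model and inspect its input/output tensors
interpreter = tf.lite.Interpreter(model_path='converted_model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['shape'], output_details[0]['shape'])

# run it once on random data just to make sure inference goes through
dummy = np.random.random_sample(tuple(input_details[0]['shape'])).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))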

Download and install Android Studio.

git clone the Google codelab repository

cd tensorflow-for-poets-2

Open the TFLite project you just cloned in Android Studio, rename converted_model.tflite and output_labels.txt as shown in the picture below, and drop them into the assets folder to replace the files there.

Mind the versions: download the matching SDK and build tools.

Good. Build it, package it as an APK, and install it on the phone.

Here's how it runs on a P20 Pro.

Read the next part quietly: bits and pieces of getting over the wall. Following that guy's write-up, set up the server, then install and start the client locally:

sudo sslocal -c /etc/shadowsocks.json -d start

The key part is setting up a system-wide proxy. First install polipo:

sudo apt-get install polipo

sudo vim /etc/polipo/config

The exact settings are easy to find online (a typical setup is sketched below). Once configured, restart polipo for it to take effect:
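As an illustration only: assuming the shadowsocks client listens on local port 1080 (the usual default in /etc/shadowsocks.json), /etc/polipo/config typically gets lines like the following, and shell sessions then point http_proxy/https_proxy at polipo's port 8123.

socksParentProxy = "127.0.0.1:1080"
socksProxyType = socks5
proxyPort = 8123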

sudo /etc/init.d/polipo restart

Alright, that's all of it.