Siamese Network for Image Similarity with Keras

import keras
from keras.layers import Input, Dense, Conv2D
from keras.layers import MaxPooling2D, Flatten, Convolution2D
from keras.models import Model
import os
import numpy as np
from PIL import Image
from keras.optimizers import SGD
from scipy import misc

root_path = os.getcwd()
train_names = ['bear','blackswan','bus','camel','car','cows','dance','dog','hike','hoc','kite','lucia','mallerd','pigs','soapbox','stro','surf','swing','train','walking']
test_names = ['boat','dance-jump','drift-turn','elephant','libby']

def load_data(seq_names, data_number, seq_len):
    # Generate image pairs
    print('loading data.....')
    frame_num = 51
    train_data1 = []
    train_data2 = []
    train_lab = []
    count =
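The excerpt is cut off before the pair-generation loop finishes, so the training objective is not shown. As a hedged sketch (not the author's code): a Siamese network is typically trained with a contrastive loss on the distance between the two branch embeddings, which in plain NumPy could look like:

```python
import numpy as np

def contrastive_loss(y_true, distances, margin=1.0):
    # y_true: 1 for similar pairs, 0 for dissimilar pairs
    # distances: Euclidean distance between the two embeddings of each pair
    similar_term = y_true * np.square(distances)
    # Dissimilar pairs are only penalized while closer than the margin
    dissimilar_term = (1 - y_true) * np.square(np.maximum(margin - distances, 0.0))
    return np.mean(similar_term + dissimilar_term)
```

With margin 1.0, a similar pair at distance 0 and a dissimilar pair at distance 2 both contribute zero loss, while a dissimilar pair at distance 0.5 contributes (1 - 0.5)^2 = 0.25.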
Cat vs. Dog Binary Classification with Keras

import keras
from keras.models import Sequential
from keras.layers import Dense, MaxPooling2D, Input, Flatten, Convolution2D, Dropout, GlobalAveragePooling2D
from keras.optimizers import SGD
from keras.callbacks import TensorBoard, ModelCheckpoint
from PIL import Image
import os
import numpy as np
from scipy import misc

root_path = os.getcwd()

def load_data():
    tran_imags = []
    labels = []
    seq_names = ['cat','dog']
    for seq_name in seq_names:
        frames = sorted(os.listdir(os.path.join(root_path, 'data', 'train_data', seq_name)))
        for frame in frames:
            imgs = [os.path.join(root_path, 'data', 'train_data', seq_name, frame)]
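The truncated load_data walks the 'cat' and 'dog' folders; presumably the folder index becomes the integer class label. A minimal NumPy sketch (the function name is illustrative, not from the article) of turning such integer labels into one-hot vectors for a two-class softmax output:

```python
import numpy as np

def one_hot(labels, num_classes):
    # Map integer class indices (e.g. 0 = cat, 1 = dog) to one-hot rows
    return np.eye(num_classes)[np.asarray(labels)]

# one_hot([0, 1, 1], 2) -> [[1,0], [0,1], [0,1]]
```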
ImageNet Class Labels: English-Chinese Reference Table

The model's prediction is a one-hot style probability vector; the index of the largest probability is the class number.

0: 'tench, Tinca tinca', 丁鲷(鱼)
1: 'goldfish, Carassius auratus', 金鱼,鲫鱼
2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias', 大白鲨
3: 'tiger shark,
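Mapping a probability vector to its class number is a single argmax. A small sketch with a hypothetical softmax output (values are made up; only the first four ImageNet classes are shown):

```python
import numpy as np

# Hypothetical softmax output over the first four ImageNet classes
probs = np.array([0.10, 0.70, 0.15, 0.05])

# Index of the maximum probability = predicted class number
class_id = int(np.argmax(probs))  # here 1, i.e. 'goldfish, Carassius auratus'
```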
Connected Component Labeling in Images

### For a recent experiment I needed to label the connected regions (8-connectivity) of a binary image. At first, to verify that the algorithm worked, I used a recursive approach (far too slow, and once the image gets large it easily
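The excerpt ends before the non-recursive version is shown. As a hedged sketch of the general technique (not necessarily the author's implementation): an iterative, queue-based flood fill labels each component without deep recursion, avoiding the stack-depth problem on large images:

```python
import numpy as np
from collections import deque

def label_components(binary, connectivity=8):
    """Label connected regions of a binary image iteratively (BFS flood fill)."""
    if connectivity == 8:
        offsets = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    else:  # 4-connectivity
        offsets = [(-1,0), (1,0), (0,-1), (0,1)]
    rows, cols = binary.shape
    labels = np.zeros((rows, cols), dtype=int)
    current = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and labels[r, c] == 0:
                current += 1                      # start a new component
                labels[r, c] = current
                queue = deque([(r, c)])
                while queue:                      # explicit queue, no recursion
                    y, x = queue.popleft()
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current
```

For an anti-diagonal of foreground pixels, 8-connectivity merges them into one region while 4-connectivity yields one region per pixel, which makes a quick sanity check.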
Hadoop Command Notes

1. List the files in a directory: -ls
   hadoop fs -ls /dir
   hadoop fs -ls -R /dir
2. Put local files into the HDFS file system: -put
   hadoop fs -put <local file> <hdfs file>
   hadoop fs -put <local file or dir> <hdfs dir>
   # read keyboard input into