Tomorrow is 520, Chinese Internet Valentine's Day. I wonder how many of you have evolved from single dogs into couple dogs flaunting your love.
Back in my junior year, a computer science friend of mine won over a girl from the math department with code he wrote himself, like the example below. (This heart really runs!) Nobody quite knows why she passed over the thirty-odd handsome guys in her class (that's right, she was the only girl in it) and chose my friend, with his thinning hair and his "six-months-pregnant" belly.
Today I'll show you how to use Python to make a special gift for your loved one.
And of course, if you're still single, you can just as well use it as a confession tool and declare your feelings to the one you love.
Those of you who already write Python won't need my help. But what if you can't program and still want to try?
For that I've packaged everything as a small program; message "小程序" to the official account backstage to get it. Just run it.
If you can code, read on! Deliver this gift, and I guarantee tomorrow night will end the way the old poems put it: clouds and rain over Mount Wu.
01
First, a beginner version: something simple, a heart drawn with Python.
Here's the code:
import turtle
import time

# Draw the rounded arc at the top of the heart
def LittleHeart():
    for i in range(200):
        turtle.right(1)
        turtle.forward(2)

# Ask for the message to write; defaults to "I Love you"
love = input('Please enter a sentence of love, otherwise the default is "I Love you": ')
# Ask for a signature; left empty, no signature is written
me = input('Please enter a pen name, otherwise no signature is written: ')
if love == '':
    love = 'I Love you'

# Window size
turtle.setup(width=900, height=500)
# Pen and fill colors
turtle.color('red', 'pink')
# Pen thickness
turtle.pensize(3)
# Drawing speed
turtle.speed(1)
# Lift the pen
turtle.up()
# Hide the turtle cursor
turtle.hideturtle()
# Go to the starting point; (0, 0) is the center of the window
turtle.goto(0, -180)
turtle.showturtle()

# Draw the outline and fill it
turtle.down()
turtle.speed(1)
turtle.begin_fill()
turtle.left(140)
turtle.forward(224)
# Draw the arc on the left half of the heart
LittleHeart()
# Draw the arc on the right half of the heart
turtle.left(120)
LittleHeart()
# Draw the closing edge
turtle.forward(224)
turtle.end_fill()
turtle.pensize(5)

# Write the message inside the heart
turtle.goto(0, 0)
turtle.showturtle()
turtle.color('#CD5C5C', 'pink')
# font takes any typeface installed on your machine; align sets where the text starts
turtle.write(love, font=('gungsuh', 30), align="center")
time.sleep(2)

# Write the signature, if one was given
if me != '':
    turtle.color('black', 'pink')
    time.sleep(2)
    turtle.goto(180, -180)
    turtle.showturtle()
    turtle.write(me, font=('gungsuh', 20), align="center", move=True)

# Click the window to close it
window = turtle.Screen()
window.exitonclick()
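To try it, save the code above as a .py file, say heart.py (any name works), and run it with Python 3; a window opens and the heart draws itself:

python3 heart.py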
The final effect of this code is shown below. It's a fairly simple, entry-level heart, nothing difficult; you can always extend the code yourself and make it fancier.
If that's not flashy enough for you, I'll be a good person to the end and help you build a love-confession tree:
import turtle
import random

def love(x, y):  # Draw a little heart at (x, y)
    lv = turtle.Turtle()
    lv.hideturtle()
    lv.up()
    lv.goto(x, y)  # Move to (x, y)

    def curvemove():  # Draw the arc of the heart
        for i in range(20):
            lv.right(10)
            lv.forward(2)

    lv.color('red', 'pink')
    lv.speed(0)  # 0 is the fastest drawing speed
    lv.pensize(1)
    # Draw and fill the little heart
    lv.down()
    lv.begin_fill()
    lv.left(140)
    lv.forward(22)
    curvemove()
    lv.left(120)
    curvemove()
    lv.forward(22)
    lv.write("WM", font=("Arial", 12, "normal"), align="center")  # Write the name of the one you love
    lv.left(140)  # Restore the heading after drawing
    lv.end_fill()

def tree(branchLen, t):
    if branchLen > 5:  # Too little branch left ends the recursion
        if branchLen < 20:  # Short remaining branches turn green
            t.color("green")
            t.pensize(random.uniform((branchLen + 5) / 4 - 2, (branchLen + 6) / 4 + 5))
            t.down()
            t.forward(branchLen)
            love(t.xcor(), t.ycor())  # Pass the turtle's current coordinates
            t.up()
            t.backward(branchLen)
            t.color("brown")
            return
        t.pensize(random.uniform((branchLen + 5) / 4 - 2, (branchLen + 6) / 4 + 5))
        t.down()
        t.forward(branchLen)
        # Recurse into the two sub-branches
        ang = random.uniform(15, 45)
        t.right(ang)
        tree(branchLen - random.uniform(12, 16), t)  # Randomly shorten the next branch
        t.left(2 * ang)
        tree(branchLen - random.uniform(12, 16), t)
        t.right(ang)
        t.up()
        t.backward(branchLen)

myWin = turtle.Screen()
t = turtle.Turtle()
t.hideturtle()
t.speed(0)  # 0 is the fastest drawing speed
t.left(90)
t.up()
t.backward(200)
t.down()
t.color("brown")
t.pensize(32)
t.forward(60)
tree(100, t)
myWin.exitonclick()
The "WM" in the picture can be changed! See the "WM" in the code? Just replace those two letters with your sweetheart's name; Chinese and English both work.
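One optional tweak, if you'd rather not edit the source each time: read the name from input once at the top of the script and write the variable instead of the literal. A minimal sketch (the name variable is my own addition):

# read the name once at the top of the script instead of hard-coding "WM"
name = input('Enter the name to write inside the hearts: ')

and then, inside love(), write the variable:

lv.write(name, font=("Arial", 12, "normal"), align="center")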
02
Besides the above, you can also paint with Python and give your loved one a one-of-a-kind piece of art.
This is nothing like a beauty-filter app: with deep learning you can copy any famous painting in the world, or indeed any picture, and turn it into a unique portrait for the person you love.
Here I'll teach you two ways to paint. Watch me work.
The first: blending two images.
Pick two pictures. You might choose a portrait of your sweetheart for one and a landscape (or anything else) for the other; that part is down to your taste. Here I picked two landscapes.
First, step one:
from PIL import Image

print('---Please put the pictures in this folder---')
img1_address = input("Please enter the first picture's name plus the file suffix: ")
img2_address = input("Please enter the second picture's name plus the file suffix: ")
percent1 = input('Please enter the display weight of the first picture, the default is 0.5: ')
percent2 = input('Please enter the display weight of the second picture, the default is 0.5: ')
# merge2 is the function assembled from the snippets that follow
merge2(img1_address, img2_address, percent1, percent2)
Then set the display weights; if nothing was entered, the defaults are used:
# If either display weight is missing, fall back to 0.5 for both;
# otherwise convert the input strings to numbers
if percent1 == '' or percent2 == '':
    percent1 = 0.50
    percent2 = 0.50
else:
    percent1 = float(percent1)
    percent2 = float(percent2)
Next, open the two pictures:
# Open the pictures as RGB, so getpixel returns (r, g, b)
img1 = Image.open(img1_address).convert('RGB')
img2 = Image.open(img2_address).convert('RGB')
Then step three, make the two weights sum to 1 (for example, entering 0.7 for the first picture forces the second down to 0.3):
# Make the two display weights add up to 1
if percent1 + percent2 != 1:
    percent2 = 1 - percent1
Then take the image dimensions:
# Use the smaller width and height of the two pictures
width = min(img1.size[0], img2.size[0])
height = min(img1.size[1], img2.size[1])
img_new = Image.new('RGB', (width, height))
Now blend the pixels:
# Blend the two pictures pixel by pixel, weighted by the display ratios
for x in range(width):
    for y in range(height):
        r1, g1, b1 = img1.getpixel((x, y))
        r2, g2, b2 = img2.getpixel((x, y))
        r = int(percent1 * r1 + percent2 * r2)
        g = int(percent1 * g1 + percent2 * g2)
        b = int(percent1 * b1 + percent2 * b2)
        img_new.putpixel((x, y), (r, g, b))
Finally, just save the result!
# Save the blended picture
img_new.save('new.jpg')
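For reference, here is one way the snippets above fit together as the merge2 function that step one calls; treat it as a sketch of the idea, not the article's exact source:

from PIL import Image

def merge2(img1_address, img2_address, percent1, percent2):
    # Fall back to 0.5/0.5 when either weight is missing
    if percent1 == '' or percent2 == '':
        percent1, percent2 = 0.50, 0.50
    else:
        percent1, percent2 = float(percent1), float(percent2)
    # Make the two display weights add up to 1
    if percent1 + percent2 != 1:
        percent2 = 1 - percent1
    # Open both pictures as RGB
    img1 = Image.open(img1_address).convert('RGB')
    img2 = Image.open(img2_address).convert('RGB')
    # Use the smaller width and height of the two
    width = min(img1.size[0], img2.size[0])
    height = min(img1.size[1], img2.size[1])
    img_new = Image.new('RGB', (width, height))
    # Blend pixel by pixel
    for x in range(width):
        for y in range(height):
            r1, g1, b1 = img1.getpixel((x, y))
            r2, g2, b2 = img2.getpixel((x, y))
            r = int(percent1 * r1 + percent2 * r2)
            g = int(percent1 * g1 + percent2 * g2)
            b = int(percent1 * b1 + percent2 * b2)
            img_new.putpixel((x, y), (r, g, b))
    # Save the blended picture
    img_new.save('new.jpg')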
Here's what the blend looks like:
Isn't it lovely? It has the feel of a famous painting, very much in Monet's Impressionist style. You can use this code to make a distinctive, beautiful portrait for your sweetheart. I've packaged this one as a small program for you as well.
The second method is neural style transfer:
Use Python's deep learning packages to train the computer to mimic the style of a world-famous painting, then apply that style to another picture!
No small program for this one, because it pulls in a few hundred dependency packages.
It's also more technically demanding. First, install the required modules; pip handles each in one line:
pip3 install keras
pip3 install h5py
pip3 install tensorflow
Without a proxy, the TensorFlow download can be slow; you can also install from source if you prefer. (When this was written, TensorFlow only supported Python 3.5, so install a 3.5 interpreter first.)
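If the direct download crawls, pointing pip at a PyPI mirror is a common workaround; the Tsinghua mirror below is just one example:

pip3 install tensorflow -i https://pypi.tuna.tsinghua.edu.cn/simple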
Then download the VGG16 model. Save the code below as a .py file and put it in the same folder as the pictures you want to render.
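You don't have to fetch the VGG16 weights by hand: Keras downloads and caches them the first time the model is built. Running this snippet once ahead of time (the same imports the script below uses) warms the cache:

from keras.applications import vgg16
# the first call downloads the ImageNet weights and caches them under ~/.keras/models
vgg16.VGG16(weights='imagenet', include_top=False)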
Here the style template is the Mona Lisa: the computer learns the style of this world-famous painting and applies it to your own picture.
Here's the code (adapted from code by the Zhihu user 楊航鋒):
from __future__ import print_function
from keras.preprocessing.image import load_img, img_to_array
from scipy.misc import imsave  # needs scipy < 1.2; newer scipy moved imsave to imageio
import numpy as np
import time
import argparse
from keras.applications import vgg16
from keras import backend as K
from scipy.optimize import fmin_l_bfgs_b
parser = argparse.ArgumentParser(description='Neural style transfer with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,help='Path to the image to transform.')
parser.add_argument('style_reference_image_path', metavar='ref', type=str, help='Path to the style reference image.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,help='Prefix for the saved results.')
parser.add_argument('--iter', type=int, default=15, required=False,help='Number of iterations to run.')
parser.add_argument('--content_weight', type=float, default=0.025, required=False,help='Content weight.')
parser.add_argument('--style_weight', type=float, default=1.0, required=False,help='Style weight.')
parser.add_argument('--tv_weight', type=float, default=1.0, required=False,help='Total Variation weight.')
args = parser.parse_args()
base_image_path = args.base_image_path
style_reference_image_path = args.style_reference_image_path
result_prefix = args.result_prefix
iterations = args.iter
# Weights of the different loss components
total_variation_weight = args.tv_weight
style_weight = args.style_weight
content_weight = args.content_weight
# Dimensions of the generated picture
width, height = load_img(base_image_path).size
img_nrows = 400
img_ncols = int(width * img_nrows / height)
# util function to open, resize, and format pictures into appropriate tensors
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_nrows, img_ncols))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img
# util function to convert a tensor into a valid image
def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, img_nrows, img_ncols))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_nrows, img_ncols, 3))
    # Remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # 'BGR' -> 'RGB' conversion
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x
# get tensor representations of our images
base_image = K.variable(preprocess_image(base_image_path))
style_reference_image = K.variable(preprocess_image(style_reference_image_path))
# this will contain our generated image
if K.image_data_format() == 'channels_first':
    combination_image = K.placeholder((1, 3, img_nrows, img_ncols))
else:
    combination_image = K.placeholder((1, img_nrows, img_ncols, 3))
# combine the 3 images into a single Keras tensor
input_tensor = K.concatenate([base_image,
style_reference_image,
combination_image], axis=0)
# build the VGG16 network with our 3 images as input
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=input_tensor,
weights='imagenet', include_top=False)
print('Model loaded.')
# get the symbolic outputs of each "key" layer (we gave them unique names).
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# compute the neural style loss
# first we need to define 4 util functions
# the gram matrix of an image tensor (feature-wise outer product)
def gram_matrix(x):
    assert K.ndim(x) == 3
    if K.image_data_format() == 'channels_first':
        features = K.batch_flatten(x)
    else:
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    gram = K.dot(features, K.transpose(features))
    return gram
# the "style loss" is designed to maintain
# the style of the reference image in the generated image.
# It is based on the gram matrices (which capture style) of feature maps
# from the style reference image and from the generated image
def style_loss(style, combination):
    assert K.ndim(style) == 3
    assert K.ndim(combination) == 3
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = img_nrows * img_ncols
    return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))
# an auxiliary loss function
# designed to maintain the "content" of the base image in the generated image
def content_loss(base, combination):
    return K.sum(K.square(combination - base))
# the 3rd loss function, total variation loss, designed to keep the generated image locally coherent
def total_variation_loss(x):
    assert K.ndim(x) == 4
    if K.image_data_format() == 'channels_first':
        a = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, 1:, :img_ncols - 1])
        b = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] - x[:, :, :img_nrows - 1, 1:])
    else:
        a = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, 1:, :img_ncols - 1, :])
        b = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] - x[:, :img_nrows - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))
# combine these loss functions into a single scalar
loss = K.variable(0.)
layer_features = outputs_dict['block4_conv2']
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(base_image_features,
combination_features)
feature_layers = ['block1_conv1', 'block2_conv1',
'block3_conv1', 'block4_conv1',
'block5_conv1']
for layer_name in feature_layers:
layer_features = outputs_dict[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss += (style_weight / len(feature_layers)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)
# get the gradients of the generated image wrt the loss
grads = K.gradients(loss, combination_image)
outputs = [loss]
if isinstance(grads, (list, tuple)):
    outputs += grads
else:
    outputs.append(grads)
f_outputs = K.function([combination_image], outputs)
def eval_loss_and_grads(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((1, 3, img_nrows, img_ncols))
    else:
        x = x.reshape((1, img_nrows, img_ncols, 3))
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values
"""
this Evaluator class makes it possible
to compute loss and gradients in one pass
while retrieving them via two separate functions,
"loss" and "grads". This is done because scipy.optimize
requires separate functions for loss and gradients,
but computing them separately would be inefficient.
這個(gè)評估器類使它成為可能艰垂。
在一個(gè)通道中計(jì)算損耗和梯度。
當(dāng)通過兩個(gè)不同的函數(shù)檢索它們時(shí)埋虹,
“損失”和“梯度”猜憎。這是因?yàn)閟cipy.optimize
要求分離的函數(shù)用于損失和梯度,
但是單獨(dú)計(jì)算它們將是低效的
"""
class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grads_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values
evaluator = Evaluator()
# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the neural style loss
if K.image_data_format() == 'channels_first':
    x = np.random.uniform(0, 255, (1, 3, img_nrows, img_ncols)) - 128.
else:
    x = np.random.uniform(0, 255, (1, img_nrows, img_ncols, 3)) - 128.
for i in range(iterations):
    print('Start of iteration', i)
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    # save current generated image
    img = deprocess_image(x.copy())
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))
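Since the script takes its inputs from argparse, run it from a terminal. A typical invocation (all three file names are placeholders of your choosing) looks like:

python3 neural_style.py my_photo.jpg mona_lisa.jpg result --iter 15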
The rendering progresses gradually, iteration by iteration:
Now, I have a wife, and she is very good-looking. But to avoid hurting anyone's feelings, I'll demonstrate by rendering Wanmen's 新起點(diǎn)嘉園 office building in the style of a Monet, so you can see the effect for yourselves.
The result has real texture. The code is all here, and I'm sure many of you will be more creative than I am; whatever you make will be one of a kind. The example below, for instance, turned out especially well. Try rendering a portrait in the style of a famous painting; the results are beautiful and genuinely personal!
That said, if your aesthetic sense is shaky, ask someone nearby first. Below is a failed attempt a friend of mine made with a photo of Liu Yifei.
He tried to restyle the photo of Liu Yifei after Kramskoy's Portrait of an Unknown Woman and turned it into a blurry mess. I'm posting it here so everyone can have a laugh......
In the end, any gift you make with real care will touch the person you love.
So who says programmers aren't romantic? When a programmer truly likes someone, he will do every warm thing he can for her, even if it's in his own way. Court someone with sincerity and she will be moved.
Love has no cost-performance ratio and no risk control, but as long as you have genuinely given your all, you won't regret it.
May everyone longing for love find the one their heart belongs to this 520.