These past few days I have been trying to understand TensorFlow's architecture and collected some material from the web. The quality varies a lot, so I am keeping the pieces I found worthwhile here.
Tensorflow源碼解析2 -- 前后端連接的橋梁 - Session
Let's start with session->Run, taking the implementation in DirectSession as the example.
Status DirectSession::Run(const RunOptions& run_options,
                          const NamedTensorList& inputs,
                          const std::vector<string>& output_names,
                          const std::vector<string>& target_nodes,
                          std::vector<Tensor>* outputs,
                          RunMetadata* run_metadata,
                          const thread::ThreadPoolOptions& threadpool_options) {
  // ... (excerpt)
  for (const auto& it : inputs) {
    input_tensor_names.push_back(it.first);
    input_size += it.second.AllocatedBytes();
  }
  // ...
  TF_RETURN_IF_ERROR(GetOrCreateExecutors(input_tensor_names, output_names,
                                          target_nodes, &executors_and_keys,
                                          &run_state_args));
  // ...
}
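Before going deeper it helps to see where this function is entered from. Below is a minimal C++ caller sketch: Session, NewSession, Create and Run are the public C++ session API (a default SessionOptions yields a DirectSession), while the graph file name and the node names "x" / "y" are placeholders for illustration.

#include <string>
#include <utility>
#include <vector>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  using tensorflow::GraphDef;
  using tensorflow::Session;
  using tensorflow::SessionOptions;
  using tensorflow::Tensor;

  // Load a previously exported GraphDef (the path is a placeholder).
  GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "graph.pb", &graph_def));

  // Create a session; with default options this is a DirectSession.
  Session* session = nullptr;
  TF_CHECK_OK(tensorflow::NewSession(SessionOptions(), &session));
  TF_CHECK_OK(session->Create(graph_def));

  // Feed "x", fetch "y" (placeholder node names in the loaded graph).
  Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 2}));
  x.flat<float>().setZero();
  std::vector<Tensor> outputs;
  // This call is what ends up in DirectSession::Run shown above.
  TF_CHECK_OK(session->Run({{"x", x}}, {"y"}, {}, &outputs));

  session->Close();
  delete session;
  return 0;
}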
For an analysis of GetOrCreateExecutors, see the following post:
TensorFlow 拆包(三):Graph 和 Node
Status DirectSession::GetOrCreateExecutors(
    gtl::ArraySlice<string> inputs, gtl::ArraySlice<string> outputs,
    gtl::ArraySlice<string> target_nodes, ExecutorsAndKeys** executors_and_keys,
    RunStateArgs* run_state_args) {
  // ... (excerpt)
  CallableOptions callable_options;
  callable_options.mutable_feed()->Reserve(inputs_sorted.size());
  for (const string& input : inputs_sorted) {
    callable_options.add_feed(input);
  }
  callable_options.mutable_fetch()->Reserve(outputs_sorted.size());
  for (const string& output : outputs_sorted) {
    callable_options.add_fetch(output);
  }
  // ...
  TF_RETURN_IF_ERROR(
      CreateExecutors(callable_options, &ek, &func_info, run_state_args));
  // ...
}
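The core idea of GetOrCreateExecutors is caching: the feed, fetch and target names are sorted and joined into a canonical key, and the ExecutorsAndKeys built for that key are reused by later Run calls with the same signature. Below is a simplified, self-contained sketch of that caching pattern; the types, names and key format are made up for illustration and are not the real TensorFlow code.

#include <algorithm>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Illustrative stand-in for TensorFlow's ExecutorsAndKeys.
struct ExecutorsSketch {
  std::string key;  // the canonical key this entry was built for
};

// Join sorted feeds/fetches/targets into a canonical lookup key, so that
// the same Run signature (in any argument order) maps to one cache entry.
std::string MakeExecutorKey(std::vector<std::string> feeds,
                            std::vector<std::string> fetches,
                            std::vector<std::string> targets) {
  std::sort(feeds.begin(), feeds.end());
  std::sort(fetches.begin(), fetches.end());
  std::sort(targets.begin(), targets.end());
  std::string key;
  for (const auto& f : feeds) { key += f; key += ','; }
  key += "->";
  for (const auto& f : fetches) { key += f; key += ','; }
  key += '/';
  for (const auto& t : targets) { key += t; key += ','; }
  return key;
}

// Look up the key; only build new executors (cf. CreateExecutors) on a miss.
ExecutorsSketch* GetOrCreateSketch(
    std::map<std::string, std::unique_ptr<ExecutorsSketch>>* cache,
    const std::string& key) {
  auto it = cache->find(key);
  if (it != cache->end()) return it->second.get();   // hit: reuse executors
  auto entry = std::make_unique<ExecutorsSketch>();  // miss: build a new entry
  entry->key = key;
  ExecutorsSketch* raw = entry.get();
  (*cache)[key] = std::move(entry);
  return raw;
}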
Status DirectSession::CreateExecutors(
    const CallableOptions& callable_options,
    std::unique_ptr<ExecutorsAndKeys>* out_executors_and_keys,
    std::unique_ptr<FunctionInfo>* out_func_info,
    RunStateArgs* run_state_args) {
  // ... (excerpt)
  std::unordered_map<string, std::unique_ptr<Graph>> graphs;
  TF_RETURN_IF_ERROR(CreateGraphs(
      options, &graphs, &func_info->flib_def, run_state_args, &ek->input_types,
      &ek->output_types, &ek->collective_graph_key));
  // ...
}
Status DirectSession::CreateGraphs(
    const BuildGraphOptions& subgraph_options,
    std::unordered_map<string, std::unique_ptr<Graph>>* outputs,
    std::unique_ptr<FunctionLibraryDefinition>* flib_def,
    RunStateArgs* run_state_args, DataTypeVector* input_types,
    DataTypeVector* output_types, int64* collective_graph_key) {
  // ... (excerpt)
  std::unordered_map<string, GraphDef> partitions;
  TF_RETURN_IF_ERROR(Partition(popts, &client_graph->graph, &partitions));
  // ...
}
Status Partition(const PartitionOptions& opts, Graph* g,
                 std::unordered_map<string, GraphDef>* partitions) {
  // ... (elided: splits g into one GraphDef per partition)
}
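Partition splits the client graph into one GraphDef per location (by default, per device), using opts.node_to_loc to decide where each node lives, and inserts Send/Recv node pairs on edges that cross partitions. The toy sketch below shows only the bucketing step; the types and function are invented for illustration and omit the Send/Recv rewriting.

#include <map>
#include <string>
#include <vector>

// Toy node/graph types, for illustration only.
struct ToyNode {
  std::string name;
  std::string assigned_device;  // e.g. "/job:localhost/replica:0/task:0/device:CPU:0"
};

struct ToyGraphDef {
  std::vector<ToyNode> nodes;
};

// Bucket nodes by their assigned device, producing one sub-graph per device.
// The real Partition additionally rewrites cross-device edges into
// Send/Recv node pairs, which is omitted here.
std::map<std::string, ToyGraphDef> PartitionByDevice(
    const std::vector<ToyNode>& graph) {
  std::map<std::string, ToyGraphDef> partitions;
  for (const ToyNode& node : graph) {
    partitions[node.assigned_device].nodes.push_back(node);
  }
  return partitions;
}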
For an analysis of the Executor, see:
TensorFlow Executor解析
Some other posts worth reading:
TensorFlow的自動(dòng)求導(dǎo)具體是在哪部分代碼里實(shí)現(xiàn)的?
動(dòng)手實(shí)現(xiàn)TensorFlow--反向傳播Backpropagation
實(shí)現(xiàn)屬于自己的TensorFlow(三) - 反向傳播與梯度下降實(shí)現(xiàn)
Tensorflow compute_gradients和apply_gradients原理淺析
tensorflow optimizer源碼閱讀筆記
TensorFlow優(yōu)化器淺析 反向傳播圖
TensorFlow中的Placement啟發(fā)式算法模塊——Placer