
darknet source code explained — copy_cpu

This function is actually very simple: it is just an element-by-element copy loop.
Compared with the cost of copying a long stretch of memory, the overhead of one function call is negligible, so wrapping the loop in a function buys code reuse at essentially no performance cost.
/* Copy N floats from X to Y, reading every INCX-th element of X
   and writing every INCY-th element of Y. */
void copy_cpu(int N, float *X, int INCX, float *Y, int INCY)
{
    int i;
    for(i = 0; i < N; ++i) Y[i*INCY] = X[i*INCX];
}

It can also copy with a stride. The most likely reason for this design is that the interface mirrors the BLAS `scopy` routine: the INCX/INCY increments let the same function copy a row or a column of a matrix, or interleaved data, without first repacking it into a contiguous buffer (darknet's own call sites all pass 1, but the generality comes for free).

This function has many call sites throughout the codebase:
activation_layer.c:40: copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);
activation_layer.c:47: copy_cpu(l.outputs*l.batch, l.delta, 1, net.delta, 1);
batchnorm_layer.c:137: if(l.type == BATCHNORM) copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);
batchnorm_layer.c:138: copy_cpu(l.outputs*l.batch, l.output, 1, l.x, 1);
batchnorm_layer.c:149: copy_cpu(l.outputs*l.batch, l.output, 1, l.x_norm, 1);
batchnorm_layer.c:171: if(l.type == BATCHNORM) copy_cpu(l.outputs*l.batch, l.delta, 1, net.delta, 1);
blas.c:226: void copy_cpu(int N, float *X, int INCX, float *Y, int INCY)
crnn_layer.c:115: copy_cpu(l.hidden * l.batch, old_state, 1, l.state, 1);
crnn_layer.c:146: copy_cpu(l.hidden * l.batch, input_layer.output, 1, l.state, 1);
crnn_layer.c:156: copy_cpu(l.hidden * l.batch, input_layer.output - l.hidden*l.batch, 1, l.state, 1);
crnn_layer.c:168: copy_cpu(l.hidden*l.batch, self_layer.delta, 1, input_layer.delta, 1);
demo.c:120: //copy_cpu(classes, dets[0][i].prob, 1, avg[i].prob, 1);
gru_layer.c:150: copy_cpu(l.outputs*l.batch, l.state, 1, l.prev_state, 1);
gru_layer.c:164: copy_cpu(l.outputs*l.batch, uz.output, 1, l.z_cpu, 1);
gru_layer.c:167: copy_cpu(l.outputs*l.batch, ur.output, 1, l.r_cpu, 1);
gru_layer.c:173: copy_cpu(l.outputs*l.batch, l.state, 1, l.forgot_state, 1);
gru_layer.c:179: copy_cpu(l.outputs*l.batch, uh.output, 1, l.h_cpu, 1);
gru_layer.c:190: copy_cpu(l.outputs*l.batch, l.output, 1, l.state, 1);
l2norm_layer.c:39: copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);
local_layer.c:99: copy_cpu(l.outputs, l.biases, 1, l.output + i*l.outputs, 1);
logistic_layer.c:40: copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);
lstm_layer.c:197: copy_cpu(l.outputs*l.batch, wf.output, 1, l.f_cpu, 1);
lstm_layer.c:200: copy_cpu(l.outputs*l.batch, wi.output, 1, l.i_cpu, 1);
lstm_layer.c:203: copy_cpu(l.outputs*l.batch, wg.output, 1, l.g_cpu, 1);
lstm_layer.c:206: copy_cpu(l.outputs*l.batch, wo.output, 1, l.o_cpu, 1);
lstm_layer.c:214: copy_cpu(l.outputs*l.batch, l.i_cpu, 1, l.temp_cpu, 1);
lstm_layer.c:219: copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.h_cpu, 1);
lstm_layer.c:223: copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.cell_cpu, 1);
lstm_layer.c:224: copy_cpu(l.outputs*l.batch, l.h_cpu, 1, l.output, 1);
lstm_layer.c:275: if (i != 0) copy_cpu(l.outputs*l.batch, l.cell_cpu - l.outputs*l.batch, 1, l.prev_cell_cpu, 1);
lstm_layer.c:276: copy_cpu(l.outputs*l.batch, l.cell_cpu, 1, l.c_cpu, 1);
lstm_layer.c:277: if (i != 0) copy_cpu(l.outputs*l.batch, l.output - l.outputs*l.batch, 1, l.prev_state_cpu, 1);
lstm_layer.c:278: copy_cpu(l.outputs*l.batch, l.output, 1, l.h_cpu, 1);
lstm_layer.c:282: copy_cpu(l.outputs*l.batch, wf.output, 1, l.f_cpu, 1);
lstm_layer.c:285: copy_cpu(l.outputs*l.batch, wi.output, 1, l.i_cpu, 1);
lstm_layer.c:288: copy_cpu(l.outputs*l.batch, wg.output, 1, l.g_cpu, 1);
lstm_layer.c:291: copy_cpu(l.outputs*l.batch, wo.output, 1, l.o_cpu, 1);
lstm_layer.c:299: copy_cpu(l.outputs*l.batch, l.delta, 1, l.temp3_cpu, 1);
lstm_layer.c:301: copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.temp_cpu, 1);
lstm_layer.c:304: copy_cpu(l.outputs*l.batch, l.temp3_cpu, 1, l.temp2_cpu, 1);
lstm_layer.c:310: copy_cpu(l.outputs*l.batch, l.c_cpu, 1, l.temp_cpu, 1);
lstm_layer.c:314: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wo.delta, 1);
lstm_layer.c:319: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, uo.delta, 1);
lstm_layer.c:324: copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);
lstm_layer.c:327: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wg.delta, 1);
lstm_layer.c:332: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, ug.delta, 1);
lstm_layer.c:337: copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);
lstm_layer.c:340: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wi.delta, 1);
lstm_layer.c:345: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, ui.delta, 1);
lstm_layer.c:350: copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);
lstm_layer.c:353: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, wf.delta, 1);
lstm_layer.c:358: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, uf.delta, 1);
lstm_layer.c:363: copy_cpu(l.outputs*l.batch, l.temp2_cpu, 1, l.temp_cpu, 1);
lstm_layer.c:365: copy_cpu(l.outputs*l.batch, l.temp_cpu, 1, l.dc_cpu, 1);
matrix.c:86: copy_cpu(c.cols, m.vals[i], 1, c.vals[i], 1);
normalization_layer.c:86: copy_cpu(w*h, norms + w*h*(k-1), 1, norms + w*h*k, 1);
reorg_layer.c:103: copy_cpu(l.inputs, net.input + i*l.inputs, 1, l.output + i*l.outputs, 1);
reorg_layer.c:126: copy_cpu(l.inputs, l.delta + i*l.outputs, 1, net.delta + i*l.inputs, 1);
rnn_layer.c:113: copy_cpu(l.outputs * l.batch, old_state, 1, l.state, 1);
rnn_layer.c:145: copy_cpu(l.outputs * l.batch, input_layer.output, 1, l.state, 1);
rnn_layer.c:155: copy_cpu(l.outputs * l.batch, input_layer.output - l.outputs*l.batch, 1, l.state, 1);
rnn_layer.c:167: copy_cpu(l.outputs*l.batch, self_layer.delta, 1, input_layer.delta, 1);
route_layer.c:83: copy_cpu(input_size, input + j*input_size, 1, l.output + offset + j*l.outputs, 1);
shortcut_layer.c:64: copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);

Posted by the author on 2023-10-26 09:31
Copyright © 2015-2023 猿代码-超算人才智造局 (HPC | Parallel Computing | AI) (京ICP备2021026424号-2)