How to estimate Inference time from average forward pass time in caffe?
By : dennice martinez
Date : March 29 2020, 07:55 AM
The average forward pass time is the time it takes to propagate one batch of inputs from the input ("data") layer to the output layer. The batch size specified in your models/own_xx/deploy.prototxt file determines how many images are processed per batch. For instance, running the timing command that ships with Caffe: code :
build/tools/caffe time -model=models/bvlc_alexnet/deploy.prototxt -gpu=0
...
I0426 13:07:32.701490 30417 layer_factory.hpp:77] Creating layer data
I0426 13:07:32.701513 30417 net.cpp:91] Creating Layer data
I0426 13:07:32.701529 30417 net.cpp:399] data > data
I0426 13:07:32.709048 30417 net.cpp:141] Setting up data
I0426 13:07:32.709079 30417 net.cpp:148] Top shape: 10 3 227 227 (1545870)
I0426 13:07:32.709084 30417 net.cpp:156] Memory required for data: 6183480
...
I0426 13:07:34.390281 30417 caffe.cpp:377] Average Forward pass: 16.7818 ms.
I0426 13:07:34.390290 30417 caffe.cpp:379] Average Backward pass: 12.923 ms.
I0426 13:07:34.390296 30417 caffe.cpp:381] Average ForwardBackward: 29.7969 ms.
The batch size of 10 appears in the log line `Top shape: 10 3 227 227` and comes from the Input layer shape in deploy.prototxt:
name: "AlexNet"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
}
layer { ...
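Putting the numbers together: the per-image inference time is the average forward pass time divided by the batch size from deploy.prototxt. A minimal sketch, using the figures from the log above:

```python
# Average forward pass time reported by `caffe time` (milliseconds),
# taken from the log output above
avg_forward_ms = 16.7818

# Batch size from the Input layer shape in deploy.prototxt (dim: 10 ...)
batch_size = 10

# Estimated inference time for a single image
per_image_ms = avg_forward_ms / batch_size
print(f"{per_image_ms:.4f} ms per image")  # 1.6782 ms per image
```

Note that this is an estimate for that batch size: running with batch size 1 usually gives a higher per-image time, since larger batches amortize overhead and use the GPU more efficiently.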

Bootstrap parameter estimate of nonlinear optimization in R: Why is it different than the regular parameter estimate?
By : Siva Prasad
Date : March 29 2020, 07:55 AM
First of all, you have a very small number of values, possibly too few to trust the bootstrap method. Moreover, a high proportion of fits fails for the classic bootstrap because, due to the resampling, you often do not have enough distinct x values. Here is an implementation using nls with a self-starting model and the boot package. code :
doy <- c(156,205,228,276,319,380)
len <- c(36,56,60,68,68,71)
data06 <- data.frame(doy,len)
plot(len ~ doy, data = data06)
fit <- nls(len ~ SSasympOff(doy, Asym, lrc, c0), data = data06)
summary(fit)
#profiling CI
proCI <- confint(fit)
#           2.5%      97.5%
#Asym  68.290477  75.922174
#lrc   -4.453895  -3.779994
#c0    94.777335 126.112523
curve(predict(fit, newdata = data.frame(doy = x)), add = TRUE)
#classic bootstrap
library(boot)
set.seed(42)
boot1 <- boot(data06, function(DF, i) {
tryCatch(coef(nls(len ~ SSasympOff(doy, Asym, lrc, c0), data = DF[i,])),
error = function(e) c(Asym = NA, lrc = NA, c0 = NA))
}, R = 1e3)
#proportion of unsuccessful fits
mean(is.na(boot1$t[, 1]))
#[1] 0.256
#bootstrap CI
boot1CI <- apply(boot1$t, 2, quantile, probs = c(0.025, 0.5, 0.975), na.rm = TRUE)
#          [,1]      [,2]      [,3]
#2.5%  69.70360 -4.562608  67.60152
#50%   71.56527 -4.100148 113.9287
#97.5% 74.79921 -3.697461 151.03541
#bootstrap of the residuals
data06$res < residuals(fit)
data06$fit < fitted(fit)
set.seed(42)
boot2 <- boot(data06, function(DF, i) {
DF$lenboot < DF$fit + DF[i, "res"]
tryCatch(coef(nls(lenboot ~ SSasympOff(doy, Asym, lrc, c0), data = DF)),
error = function(e) c(Asym = NA, lrc = NA, c0 = NA))
}, R = 1e3)
#proportion of unsuccessful fits
mean(is.na(boot2$t[, 1]))
#[1] 0
#(residuals) bootstrap CI
boot2CI <- apply(boot2$t, 2, quantile, probs = c(0.025, 0.5, 0.975), na.rm = TRUE)
#          [,1]      [,2]     [,3]
#2.5%  70.19380 -4.255165 106.3136
#50%   71.56527 -4.100148 113.9287
#97.5% 73.37461 -3.969012 119.2380
proCI[2,1]
CIs_k <- data.frame(lwr = c(exp(proCI[2, 1]),
exp(boot1CI[1, 2]),
exp(boot2CI[1, 2])),
upr = c(exp(proCI[2, 2]),
exp(boot1CI[3, 2]),
exp(boot2CI[3, 2])),
med = c(NA,
exp(boot1CI[2, 2]),
exp(boot2CI[2, 2])),
estimate = exp(coef(fit)[2]),
method = c("profile", "boot", "boot res"))
library(ggplot2)
ggplot(CIs_k, aes(y = estimate, ymin = lwr, ymax = upr, x = method)) +
geom_errorbar() +
geom_point(aes(color = "estimate"), size = 5) +
geom_point(aes(y = med, color = "boot median"), size = 5) +
ylab("k") + xlab("") +
scale_color_brewer(name = "", type = "qual", palette = 2) +
theme_bw(base_size = 22)
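The residual-bootstrap idea used above is not R-specific. The following is a minimal sketch in Python with NumPy, fitting a straight line by least squares as a hypothetical stand-in for the nonlinear model (the data are the doy/len values from the answer; the linear model and parameter choices are illustrative assumptions, not the SSasympOff fit):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.array([156., 205., 228., 276., 319., 380.])  # doy
y = np.array([36., 56., 60., 68., 68., 71.])        # len

# Fit a simple line y = a + b*x (a stand-in for the nonlinear model)
b, a = np.polyfit(x, y, 1)
fitted = a + b * x
res = y - fitted

# Residual bootstrap: resample residuals, add them back to the fitted
# values, refit, and collect the slope estimates
slopes = []
for _ in range(1000):
    y_boot = fitted + rng.choice(res, size=len(res), replace=True)
    b_i, _ = np.polyfit(x, y_boot, 1)
    slopes.append(b_i)

lwr, med, upr = np.quantile(slopes, [0.025, 0.5, 0.975])
print(f"slope 95% bootstrap CI: [{lwr:.4f}, {upr:.4f}], median {med:.4f}")
```

As in the R version, resampling residuals rather than whole rows keeps every x value in each refit, which is why the residual bootstrap has no failed fits here.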

Excel, Increasing Time Series find the average time between adjacent points
By : Vittawat Sangphung
Date : March 29 2020, 07:55 AM
Given an increasing time series in column J, one formula computes the average time between adjacent points: code :
=SUMPRODUCT(AVERAGE(J2:INDEX(J:J,MATCH(1E+99,J:J))-J1:INDEX(J:J,MATCH(1E+99,J:J)-1)))
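The formula averages the differences between each value and its predecessor. The same computation can be sketched in Python (the sample values are hypothetical):

```python
times = [1.0, 2.5, 4.0, 7.0]  # hypothetical increasing time series

# Average gap between adjacent points: mean of consecutive differences
gaps = [b - a for a, b in zip(times, times[1:])]
avg_gap = sum(gaps) / len(gaps)

# For a mean of consecutive differences, the sum telescopes, so this
# equals (last - first) / (n - 1)
assert abs(avg_gap - (times[-1] - times[0]) / (len(times) - 1)) < 1e-12
print(avg_gap)  # 2.0
```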

How to estimate the averagecase complexity given the input size and average time?
By : swissjava
Date : March 29 2020, 07:55 AM

how to estimate best, worst and average cases for time complexity?
By : Pravin Gosavi
Date : March 29 2020, 07:55 AM
First, note that t(n) = 2n^2 + 3n - 1 is always big O(n^2) in the worst, best, and average case. In some cases the complexity depends on the input of your algorithm; in these cases people usually calculate the worst-case complexity.
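Linear search is a standard example of an input-dependent algorithm where the three cases differ. A small sketch counting comparisons (the function and data are illustrative):

```python
def linear_search(items, target):
    """Return (index, comparisons) so the case behavior is visible."""
    comparisons = 0
    for i, v in enumerate(items):
        comparisons += 1
        if v == target:
            return i, comparisons
    return -1, comparisons

data = list(range(10))

_, best = linear_search(data, 0)    # best case: target is first, O(1)
_, worst = linear_search(data, 99)  # worst case: target absent, O(n)

# Average over all present targets: (1 + 2 + ... + n) / n = (n + 1) / 2
avg = sum(linear_search(data, t)[1] for t in data) / len(data)

print(best, worst, avg)  # 1 10 5.5
```

By contrast, t(n) = 2n^2 + 3n - 1 depends only on n, so its best, worst, and average cases coincide.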

