Consistently grab regression coefficients when coefficients can be NA
By : Dmitriy Grudin
Date : March 29 2020, 07:55 AM
Consider the two data.frames below. In each case I want to extract the intercept and the slopes for the three variables from the associated models. Grab them directly from the model; there is no need to use summary(): code :
> model2$coefficients
(Intercept)           x          x2          x3
  0.9309032   0.8736204          NA   0.5493671
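The same pattern translates to Python. Below is a minimal sketch (assuming numpy, with the coefficient names and values copied from the R output above) of dropping the NA/NaN entries before using the coefficients, mirroring `coef(model2)[!is.na(coef(model2))]` in R:

```python
import numpy as np

# Coefficient vector as R reports it: the aliased term x2
# comes back as NA (NaN here) because it is collinear.
names = ["(Intercept)", "x", "x2", "x3"]
coefs = np.array([0.9309032, 0.8736204, np.nan, 0.5493671])

# Keep only the estimable coefficients.
usable = {n: c for n, c in zip(names, coefs) if not np.isnan(c)}
print(usable)  # x2 is dropped
```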

How do I create a fitted value with a subset of regression coefficients in place of all coefficients?
By : user3395879
Date : March 29 2020, 07:55 AM
Edit 2015-03-29: use the original method on one subset of interactions, but retain the others. A great advantage of your original method is that it can handle interactions of any complexity; its major defect is that it cannot skip interactions that you want to keep in the model. But if you use xi to create those, # won't appear in their names, so the loop below leaves them alone. code :
sysuse auto, clear
recode rep78 1 = 2 //combine small categories
xi, prefix("") i.rep78*mpg // mpg*i.rep78 won't work
des _I*
reg price mpg foreign c.mpg#foreign _I* headroom trunk
matrix betas = e(b)
local names: colnames betas
foreach name of local names {
    if strpos("`name'", "#") > 0 {
        scalar define col_idx = colnumb(betas, "`name'")
        matrix betas[1, col_idx] = 0
    }
}
matrix score fit_sans_mpgXforeign = betas
sysuse auto, clear
gen intx = mpg*foreign // factor-variable # notation is not allowed in gen
reg price mpg foreign i.rep78 headroom trunk intx
predict mhat
gen fitted_sans_interaction = mhat - _b[intx]*intx
sysuse auto, clear
gen intx = mpg*foreign
reg price mpg foreign i.rep78 headroom trunk intx
predict mhat
gen fitted_sans_interaction = mhat - _b[intx]*intx
sysuse auto, clear
gen intx = mpg*foreign
reg price c.mpg##foreign i.rep78 headroom trunk intx
predict mhat
gen fitted_sans_interaction = mhat - _b[intx]*intx
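All the Stata variants above rest on the same fact: zeroing the interaction coefficient before predicting gives the same fitted values as subtracting `_b[intx]*intx` from the full-model prediction. A minimal numpy sketch on simulated data (variable names borrowed from the auto dataset; the values are made up for illustration) demonstrates the equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
mpg = rng.normal(20, 5, n)
foreign = rng.integers(0, 2, n).astype(float)
# Design matrix: intercept, mpg, foreign, and the interaction term.
X = np.column_stack([np.ones(n), mpg, foreign, mpg * foreign])
price = X @ np.array([6000.0, -150.0, 800.0, 40.0]) + rng.normal(0, 100, n)

# Fit OLS, then zero the interaction coefficient before predicting,
# mirroring `matrix betas[1, col_idx] = 0` + `matrix score` in Stata.
beta_hat, *_ = np.linalg.lstsq(X, price, rcond=None)
beta_no_intx = beta_hat.copy()
beta_no_intx[3] = 0.0
fit_sans_interaction = X @ beta_no_intx

# Equivalent to `mhat - _b[intx]*intx`:
mhat = X @ beta_hat
alt = mhat - beta_hat[3] * X[:, 3]
```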

Replicate a regression using a random subset of data each time and check distribution of regression coefficients?
By : KKN
Date : November 23 2020, 11:01 PM
What are you expecting to get by sampling from a fitted linear model object? code :
sample(model[i], size=300)
f <- function () {
    fit <- lm(price ~ mileage, data = dat, subset = sample(nrow(dat), 300))
    coef(fit)
}
z <- t(replicate(2000, f()))
f <- function () {
    fit <- lm(dist ~ speed, data = cars, subset = sample(nrow(cars), 30))
    coef(fit)
}
set.seed(0); f()
# (Intercept)       speed
#    22.69112     4.18617
set.seed(0); z <- t(replicate(50, f()))
head(z) ## show first few rows
# (Intercept) speed
#[1,] 22.69112 4.186170
#[2,] 21.31613 4.317624
#[3,] 12.98734 3.454305
#[4,] 22.59920 4.274417
#[5,] 22.53475 4.584875
#[6,] 18.88185 4.104758
par(mfrow = c(1,2))
hist(z[,1], main = "intercept")
hist(z[,2], main = "slope")
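The same resampling idea can be sketched in Python (assuming numpy, with simulated data standing in for the cars dataset): fit OLS on a random subset of rows, collect the coefficients, and inspect their distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
speed = rng.uniform(5, 25, n)
dist = 10 + 4 * speed + rng.normal(0, 15, n)  # simulated data

def fit_subset(k=300):
    """OLS intercept and slope on a random subset of k rows."""
    idx = rng.choice(n, size=k, replace=False)
    X = np.column_stack([np.ones(k), speed[idx]])
    beta, *_ = np.linalg.lstsq(X, dist[idx], rcond=None)
    return beta

# Each row of z is one replicate's (intercept, slope).
z = np.array([fit_subset() for _ in range(200)])
print(z.mean(axis=0))  # centered near the true values (10, 4)
```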

Obtaining regression coefficients from reduced major axis regression models using lmodel2 package
By : Filip Gačić
Date : March 29 2020, 07:55 AM
I have a large data set with which I'm undertaking many regression analyses, using reduced major axis regression from R's lmodel2 package. I need to extract the regression coefficients (r-squared, p-values, slope, and intercept) from the RMA models, which I can do easily enough for the OLS regressions. What about this? code :
# making data reproducible
data <- read.table(text = "x y
0.440895993 227.7
0.294277869 296.85
0.171754892 298.05
0 427.65
0.210884179 215.55
0.053238011 293.7
0.105395366 127.9
0.463933834 229.5
0 165.45
0.482128605 192.15
0.247341039 266.9
0 349.35
0.198833301 185.05
0.170786027 203.85
0.269818315 207.05
0.129543682 222.75
0.441665334 251.35
0 262.8
0.517974685 107.05
0.446336968 191.6", header = TRUE)
#estimate model
library(lmodel2)
mod_2 <- lmodel2(y ~ x, data = data, "interval", "interval", 99) # 99% ci
# view summary
summary(mod_2)
#                      Length Class      Mode
# y                    20     -none-     numeric
# x                    20     -none-     numeric
# regression.results    5     data.frame list
# confidence.intervals  5     data.frame list
# eigenvalues           2     -none-     numeric
# H                     1     -none-     numeric
# n                     1     -none-     numeric
# r                     1     -none-     numeric
# rsquare               1     -none-     numeric
# P.param               1     -none-     numeric
# theta                 1     -none-     numeric
# nperm                 1     -none-     numeric
# epsilon               1     -none-     numeric
# info.slope            1     -none-     numeric
# info.CI               1     -none-     numeric
# call                  6     -none-     call
# Getting r squared
(RSQ <- mod_2$rsquare)
# [1] 0.1855163
mod_2$regression.results
#   Method Intercept    Slope Angle (degrees) P-perm (1-tailed)
# 1    OLS  277.2264 177.0317        89.67636              0.04
# 2     MA  457.7304 954.2606        89.93996              0.04
# 3    SMA  331.5673 411.0173        89.86060                NA
# 4    RMA  296.6245 260.5577        89.78010              0.04
# wanted results from the RMA model
(INT <- mod_2$regression.results[[2]][4])
# [1] 296.6245
(SLOPE <- mod_2$regression.results[[3]][4])
# [1] 260.5577
(PVAL <- mod_2$regression.results[[5]][4])
# [1] 0.04
# Combined together in a data frame:
data.frame(RMA = rbind(INT, SLOPE, PVAL))
# RMA
# INT 296.6245
# SLOPE 260.5577
# PVAL 0.0400
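The positional `[[col]][row]` indexing above silently breaks if lmodel2 ever reorders its results table; selecting the row by method name is more robust. A plain-Python sketch of the same idea (values copied from the regression.results output above):

```python
# regression.results rows, copied from the lmodel2 output above
results = [
    {"Method": "OLS", "Intercept": 277.2264, "Slope": 177.0317, "P": 0.04},
    {"Method": "MA",  "Intercept": 457.7304, "Slope": 954.2606, "P": 0.04},
    {"Method": "SMA", "Intercept": 331.5673, "Slope": 411.0173, "P": None},
    {"Method": "RMA", "Intercept": 296.6245, "Slope": 260.5577, "P": 0.04},
]

# Look the row up by label rather than by position.
rma = next(r for r in results if r["Method"] == "RMA")
print(rma["Intercept"], rma["Slope"], rma["P"])
```

In R the equivalent label-based lookup would be something like `subset(mod_2$regression.results, Method == "RMA")`.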

Using sklearn linear regression, how can I constrain the calculated regression coefficients to be greater than 0?
By : NoviceMe
Date : March 29 2020, 07:55 AM
sklearn is just wrapping scipy's lstsq, which does not support this. You can easily modify sklearn's code, though: code :
if sp.issparse(X):
    if y.ndim < 2:
        out = sparse_lsqr(X, y)
        self.coef_ = out[0]
        self._residues = out[3]
    else:
        # sparse_lstsq cannot handle y with shape (M, K)
        outs = Parallel(n_jobs=n_jobs_)(
            delayed(sparse_lsqr)(X, y[:, j].ravel())
            for j in range(y.shape[1]))
        self.coef_ = np.vstack(out[0] for out in outs)
        self._residues = np.vstack(out[3] for out in outs)
else:
    self.coef_, self._residues, self.rank_, self.singular_ = \
        linalg.lstsq(X, y)
    self.coef_ = self.coef_.T
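Rather than patching sklearn internals, a non-negative least-squares solver gives the constraint directly. Here is a minimal sketch using scipy.optimize.nnls on simulated data (newer scikit-learn releases also expose this as `LinearRegression(positive=True)`):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
X = rng.random((100, 3))
# One true coefficient is negative; NNLS will force it to 0 instead.
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.01, size=100)

# nnls solves min ||X @ coef - y|| subject to coef >= 0.
coef, residual_norm = nnls(X, y)
print(coef)  # every entry is >= 0
```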

