model:
$$\int x_i(t)\,\beta(t)\,dt = \left[\textstyle\int x_i(t)\,\phi_1(t)\,dt,\ \ldots,\ \int x_i(t)\,\phi_K(t)\,dt\right]\boldsymbol{\beta}$$
• Row vectors can be stacked to form the design matrix
• The parameter of interest is the vector of basis coefficients
• Integration is necessarily numeric
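The design-matrix construction above can be sketched numerically. This is an illustrative implementation, not from the source: the function names (`trap_weights`, `make_design`) and the monomial basis are assumptions, and any basis (splines, Fourier) could be substituted.

```python
import numpy as np

def trap_weights(grid):
    """Trapezoidal-rule quadrature weights for an equally spaced grid."""
    dt = grid[1] - grid[0]
    w = np.full(grid.size, dt)
    w[0] = w[-1] = dt / 2
    return w

def make_design(X, grid, K):
    """X: (n, T) curves observed on `grid`; returns the (n, K) design matrix
    with entries Z[i, k] approximating the integral of x_i(t) * phi_k(t)."""
    Phi = np.vander(grid, N=K, increasing=True)  # monomial basis phi_k(t) = t^(k-1)
    w = trap_weights(grid)
    return (X * w) @ Phi                         # numeric integration row by row

grid = np.linspace(0, 1, 101)
X = np.vstack([np.sin(np.pi * grid), np.cos(np.pi * grid)])  # n = 2 curves
Z = make_design(X, grid, K=3)
print(Z.shape)  # (2, 3): one row per subject, one column per basis function
```

The first entry of the first row approximates $\int_0^1 \sin(\pi t)\,dt = 2/\pi$, a quick sanity check on the quadrature.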
non-Gaussian outcomes can be handled through a similar process
• Construct the design matrix as above
• Fit an exponential family regression:
$$E[y_i \mid x_i(t)] = \mu_i, \qquad g(\mu_i) = \beta_0 + \left[\textstyle\int x_i(t)\,\phi_1(t)\,dt,\ \ldots,\ \int x_i(t)\,\phi_K(t)\,dt\right]\boldsymbol{\beta}$$
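Once the design matrix is built, any GLM routine can fit the basis coefficients. A minimal sketch for a binary outcome with the logit link, using Fisher scoring (IRLS) written out by hand; the simulated matrix `Z` stands in for the integral design matrix, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 1000, 3
Z = rng.normal(size=(n, K))             # stand-in for the integral design matrix
Zd = np.column_stack([np.ones(n), Z])   # prepend a column for the intercept
beta_true = np.array([0.5, 1.0, -1.0, 0.0])
mu_true = 1 / (1 + np.exp(-Zd @ beta_true))
y = rng.binomial(1, mu_true)

# Fisher scoring for logistic regression: g(mu) = logit(mu)
beta = np.zeros(K + 1)
for _ in range(25):
    mu = 1 / (1 + np.exp(-Zd @ beta))
    W = mu * (1 - mu)                   # GLM working weights
    beta += np.linalg.solve(Zd.T @ (W[:, None] * Zd), Zd.T @ (y - mu))
print(np.round(beta, 2))
```

In practice one would call standard GLM software rather than hand-rolling the solver; the point is only that the functional term reduces to ordinary columns of the design matrix.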
the case of one predictor extends directly
• Expand each coefficient function using a basis
• Construct a row vector of integrals; focus on the basis coefficients
• The domain may vary across predictors
• The goal is to estimate the collection of basis coefficients
• Each coefficient vector has length K
• For many predictors $x_{il}(t)$, variable selection may be necessary
$$y_i = \beta_0 + \sum_{l=1}^{p} \int x_{il}(t)\,\beta_l(t)\,dt + \epsilon_i, \qquad \boldsymbol{\beta} = [\boldsymbol{\beta}_1^T, \ldots, \boldsymbol{\beta}_p^T]^T$$
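With several functional predictors, one block of integral columns is built per predictor and the blocks are concatenated within a subject. A sketch under assumed names (`design_block` is illustrative), also showing that the grids, and hence the domains, may differ across predictors:

```python
import numpy as np

def design_block(X, grid, K):
    """One n x K block of the design matrix for a single functional predictor."""
    dt = grid[1] - grid[0]
    w = np.full(grid.size, dt)
    w[0] = w[-1] = dt / 2                        # trapezoid quadrature weights
    Phi = np.vander(grid, N=K, increasing=True)  # monomial basis for illustration
    return (X * w) @ Phi

n, K = 5, 4
grid1 = np.linspace(0, 1, 50)    # domains may vary across predictors
grid2 = np.linspace(0, 2, 80)
rng = np.random.default_rng(1)
X1 = rng.normal(size=(n, 50))
X2 = rng.normal(size=(n, 80))
# Concatenate blocks within a subject; rows are then stacked across subjects
Z = np.hstack([design_block(X1, grid1, K), design_block(X2, grid2, K)])
print(Z.shape)  # (5, 8): p = 2 predictors, K = 4 columns each
```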
the goal is $\beta_l(t) = 0$ for all $t$
• Achieved by setting all relevant basis coefficients to zero
• Group variable selection accomplishes this
• Construct the design matrix by concatenating vectors within a subject and stacking rows
• Define the grouping by functional coefficient
• Estimate using your favorite approach (e.g. group SCAD)
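The mechanism by which a group penalty zeroes an entire functional coefficient can be illustrated with the group-lasso proximal operator (a simpler stand-in for the group SCAD mentioned above; the function and variable names are hypothetical). Each group holds the K basis coefficients of one $\beta_l(t)$; when the group's norm falls below the threshold, the whole group, and hence the whole function, is set to zero.

```python
import numpy as np

def group_prox(beta, groups, lam):
    """Group soft-thresholding: zero out or shrink each coefficient group."""
    out = beta.copy()
    for g in groups:
        nrm = np.linalg.norm(beta[g])
        out[g] = 0.0 if nrm <= lam else (1 - lam / nrm) * beta[g]
    return out

# Two functional coefficients, K = 3 basis coefficients each
beta = np.array([0.1, -0.05, 0.02, 2.0, 1.5, -0.7])
groups = [slice(0, 3), slice(3, 6)]   # grouping defined by functional coefficient
shrunk = group_prox(beta, groups, lam=0.5)
print(shrunk)  # first group is zeroed entirely; second is shrunk but retained
```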
scale predictors to have mean 0 and variance 1
• Ensures that coefficients for predictors with high variability aren't over-penalized
• In this setting, this can be done pointwise over $t$
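Pointwise standardization is one line of array arithmetic: at each grid point $t$, center and scale across subjects (the data here are simulated for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
X = 5 + 3 * rng.normal(size=(100, 30))   # n = 100 curves on a 30-point grid

# At each grid point t (each column), subtract the cross-subject mean
# and divide by the cross-subject standard deviation
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(Xs.mean(axis=0), 0), np.allclose(Xs.var(axis=0), 1))
```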
it is not always necessary to smooth the predictors, but in the case of noisy, sparse, or incomplete predictors this may be necessary
• Not a problem to do routinely, either
• In this case, the integration uses the smoothed predictor evaluated over a reasonable domain
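A minimal presmoothing sketch, assuming a least-squares projection onto a small basis (a stand-in for any smoother; the basis choice and names are illustrative): the noisy curve is fit once, then the smooth is evaluated on a fine grid for use in the numeric integral.

```python
import numpy as np

grid = np.linspace(0, 1, 40)                       # sparse, noisy observations
rng = np.random.default_rng(3)
x_noisy = np.sin(2 * np.pi * grid) + 0.3 * rng.normal(size=grid.size)

# Least-squares fit of the noisy curve to a 6-dimensional monomial basis
B = np.vander(grid, N=6, increasing=True)
coef, *_ = np.linalg.lstsq(B, x_noisy, rcond=None)

# Evaluate the smoothed predictor over a reasonable (fine) domain
fine = np.linspace(0, 1, 400)
x_smooth = np.vander(fine, N=6, increasing=True) @ coef
print(x_smooth.shape)
```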
constraints on estimated coefficients
• Such constraints often take the form of a penalty
$$\lambda_l \int \left[\beta_l''(t)\right]^2 dt$$
• This can be expressed in terms of a ridge penalty on the basis coefficients
• Here, this would require the use of composite penalties and additional computational burden
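The ridge form of the roughness penalty follows from the basis expansion: $\int [\beta''(t)]^2\,dt = \boldsymbol{\beta}^T P \boldsymbol{\beta}$ with $P_{jk} = \int \phi_j''(t)\,\phi_k''(t)\,dt$. A sketch under assumed names, building $P$ by quadrature for a monomial basis and using it as a generalized ridge:

```python
import numpy as np

grid = np.linspace(0, 1, 201)
K = 5
# Second derivatives of the monomial basis phi_k(t) = t^k: k(k-1) t^(k-2)
d2 = np.zeros((grid.size, K))
for k in range(2, K):
    d2[:, k] = k * (k - 1) * grid ** (k - 2)

dt = grid[1] - grid[0]
w = np.full(grid.size, dt)
w[0] = w[-1] = dt / 2                       # trapezoid quadrature weights
P = d2.T @ (w[:, None] * d2)                # P[j, k] = integral of phi_j'' phi_k''

rng = np.random.default_rng(4)
Z = rng.normal(size=(50, K))                # stand-in for the integral design matrix
y = Z @ np.array([1.0, 0.5, -0.5, 0.2, 0.1]) + 0.1 * rng.normal(size=50)
lam = 10.0
# Generalized ridge: penalized least squares with penalty lam * b^T P b
b_pen = np.linalg.solve(Z.T @ Z + lam * P, Z.T @ y)
print(np.round(b_pen, 2))
```

Note that the constant and linear basis functions are unpenalized ($P$'s first two rows and columns are zero), as they contribute nothing to the second derivative.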