Results for "low-rank adaptation"
Parameter-efficient fine-tuning (PEFT) method that injects trainable low-rank matrices into a model's layers while the pretrained weights stay frozen, enabling efficient fine-tuning.
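This result describes low-rank adaptation (LoRA). A minimal NumPy sketch of the idea, with illustrative dimensions and rank chosen here (not from the source):

```python
import numpy as np

# Hypothetical dimensions and rank, chosen for illustration.
d_out, d_in, r = 64, 64, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def forward(x):
    # Base output plus the low-rank update B @ A; only A and B are trained.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B zero-initialized, the adapted layer initially matches the frozen layer.
assert np.allclose(forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

Because r is much smaller than the layer dimensions, the adapter trains far fewer parameters than full fine-tuning.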
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
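This result describes dropout. A minimal sketch of inverted dropout in NumPy (the function name and parameters are illustrative, not from the source):

```python
import numpy as np

def dropout(x, p, rng, training=True):
    # Inverted dropout: zero each activation with probability p and rescale
    # survivors by 1/(1-p) so the expected activation is unchanged.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10_000)
y = dropout(x, p=0.5, rng=rng)
# Roughly half the entries are zeroed; the mean stays near 1.
print((y == 0).mean(), y.mean())
```

At inference time (`training=False`) the input passes through unchanged, which is why the rescaling happens during training.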
Controls the size of parameter updates; too high and training diverges, too low and it trains slowly or gets stuck.
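This result describes the learning rate. A small sketch showing both failure modes on the toy objective f(x) = x² (the values and function are illustrative):

```python
def gradient_descent(lr, steps=50, x0=5.0):
    # Minimize f(x) = x^2, whose gradient is 2x, with a fixed learning rate.
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

# lr too high: each update overshoots the minimum and |x| grows (divergence).
# lr moderate: x converges toward the minimum at 0.
# lr too low: progress is real but slow; x is still far from 0 after 50 steps.
print(gradient_descent(1.1), gradient_descent(0.1), gradient_descent(0.001))
```

Here each step multiplies x by (1 - 2·lr), so any lr above 1.0 makes that factor exceed 1 in magnitude and the iterates blow up.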
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
Low-latency prediction per request.
Ultra-low-latency algorithmic trading.
The number of linearly independent rows or columns of a matrix.
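This result is the definition of matrix rank. A quick check in NumPy (the example matrix is illustrative):

```python
import numpy as np

# The second row is twice the first, so the two are linearly dependent;
# only two rows are linearly independent, giving rank 2.
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(M))  # 2
```

A rank well below the matrix dimensions is exactly what low-rank adaptation exploits: the update B @ A has rank at most r.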