Exploring Different LoRA Variants for Efficient LLM Fine-Tuning

Gautam Chutani
9 min read · Jun 14, 2024


In the world of Generative AI, fine-tuning large models can be resource-intensive and costly. To address these challenges, researchers have developed various techniques to make the process more efficient. One such technique is Low-Rank Adaptation (LoRA). This article explores LoRA and its advanced variants, highlighting their unique features and benefits. The techniques we'll cover are: LoRA, LoRA+, LoRA-FA, DyLoRA, DP-DyLoRA, AdaLoRA, Delta-LoRA, DoRA, VeRA, LoHa, LoKr, LoRA-drop, and QLoRA.

LoRA: Low-Rank Adaptation

LoRA is a method designed to make the fine-tuning of large models more efficient. Instead of adjusting the entire weight matrix W of a pre-trained model, LoRA introduces two smaller, low-rank matrices, A and B, alongside the original weight matrices. These smaller matrices contain the trainable parameters, reducing the number of parameters that need to be updated during training. By focusing on these smaller matrices, LoRA significantly cuts down on the computational resources required for fine-tuning, making the process faster and more cost-effective.
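
To make this concrete, here is a minimal PyTorch sketch of a LoRA-wrapped linear layer (the class name, initialization, and hyperparameters are illustrative choices, not taken from any particular library):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: the pre-trained weight W stays frozen;
    only the low-rank factors A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad = False  # freeze the pre-trained W
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        # y = W x + (B A) x * scaling, where B A is a rank-r update to W
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```

Because B starts at zero, the wrapped layer initially behaves exactly like the pre-trained one, and only r * (in_features + out_features) parameters per layer are trained instead of in_features * out_features.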

LoRA+: Optimized Learning Rates

LoRA+ enhances the basic LoRA approach by using different learning rates for the matrices A and B. In standard LoRA, both matrices are updated with the same learning rate. The authors of LoRA+, drawing on an analysis of how wide neural networks behave at initialization, proposed setting a significantly higher learning rate for matrix B than for matrix A. The key difference between standard LoRA and LoRA+ is therefore this learning rate ratio, λ. The adjustment aligns the updates better with the fine-tuning task, leading to improved performance and faster convergence, particularly for complex tasks. For detailed guidelines on selecting the optimal λ value for specific tasks and models, refer to the original research paper.
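
In practice the change is small: put the A and B parameters into separate optimizer groups. A hedged sketch, assuming adapters shaped like the LoRALinear class above and an illustrative ratio of 16:

```python
import torch

def lora_plus_param_groups(model, base_lr=2e-4, lr_ratio=16):
    """Group LoRA parameters so B gets a learning rate lr_ratio (lambda)
    times higher than A, as proposed in LoRA+."""
    group_a, group_b = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        (group_b if name.endswith(".B") else group_a).append(param)
    return [
        {"params": group_a, "lr": base_lr},             # matrix A: base rate
        {"params": group_b, "lr": base_lr * lr_ratio},  # matrix B: lambda times higher
    ]

# Usage (assuming `model` carries LoRALinear adapters):
# optimizer = torch.optim.AdamW(lora_plus_param_groups(model))
```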

LoRA-FA: Freezing Part of the Adaptation

While LoRA reduces the total number of trainable parameters, it still demands a considerable amount of memory during training, since computing gradients for the down-projection matrix A requires storing the layer's full input activations. LoRA-FA (Frozen-A) addresses this by freezing matrix A at its initialization and updating only matrix B (the projection-up weight) during training. Because the weight change B·A then always lies in the low-rank subspace spanned by A's rows, and the gradient of B needs only the small rank-r activations, LoRA-FA further reduces memory requirements while achieving performance comparable to standard LoRA.
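
Starting from a model wrapped with the LoRALinear sketch above, LoRA-FA amounts to one extra line per adapter; a hedged sketch:

```python
# Freeze every adapter's A matrix; only B (the up-projection) is trained.
# A keeps its random initialization, so the update B @ A stays within
# the fixed low-rank subspace spanned by A's rows.
for module in model.modules():
    if isinstance(module, LoRALinear):  # sketch class from the LoRA section
        module.A.requires_grad = False
```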

DyLoRA: Dynamic Low-Rank Adaptation

DyLoRA trains LoRA blocks for a range of ranks instead of a single one, by ordering (sorting) the representations learned by the adapter module across ranks during training. At each training step, a rank is sampled from a pre-defined distribution, and the up-projection and down-projection matrices are truncated to that rank. This allows the rank to be chosen dynamically at inference time at no additional cost. At the price of a small compromise in performance, DyLoRA avoids the costly search for an optimal rank, significantly reducing training time, and it maintains strong performance across a much broader range of ranks than LoRA, making it a flexible and efficient alternative.
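
A hedged sketch of the core idea, assuming the factor shapes from the LoRA example above (the uniform sampling distribution and names are illustrative):

```python
import torch

def dylora_delta(x, A, B, scaling, max_rank):
    """One DyLoRA-style forward step: sample a rank b and use only the
    first b rows of A and first b columns of B, so every rank prefix of
    the factors learns to work as a standalone low-rank adapter."""
    b = int(torch.randint(1, max_rank + 1, (1,)))
    A_b = A[:b, :]   # (b, in_features)   truncated down-projection
    B_b = B[:, :b]   # (out_features, b)  truncated up-projection
    return (x @ A_b.T @ B_b.T) * scaling
```

At inference time you simply slice the factors to whatever rank fits your budget, with no retraining.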

DP-DyLoRA: Differentially Private Dynamic Low-Rank Adaptation

DP-DyLoRA extends DyLoRA with differential privacy, ensuring data privacy in federated learning environments. DyLoRA lets the rank of the adaptation matrices vary during training, improving flexibility and performance without extensive retraining; however, naively combining it with federated learning provides no formal privacy guarantees for the clients' data. DP-DyLoRA addresses this by integrating privacy-preserving mechanisms, keeping sensitive information secure while retaining DyLoRA's dynamic rank adaptation. The combination allows efficient, private, and adaptable fine-tuning, making DP-DyLoRA particularly suitable for privacy-sensitive applications such as IoT systems.
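
The privacy mechanism itself is the familiar clip-and-noise recipe applied to the adapter updates. A generic DP-SGD-style sketch, not the exact DP-DyLoRA algorithm (clip_norm and noise_multiplier are illustrative):

```python
import torch

def dp_aggregate(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0):
    """Clip each example's gradient to bound its influence, then add
    Gaussian noise so the averaged update is differentially private."""
    clipped = [g * min(1.0, clip_norm / (g.norm() + 1e-6))
               for g in per_sample_grads]
    total = torch.stack(clipped).sum(dim=0)
    noise = torch.randn_like(total) * noise_multiplier * clip_norm
    return (total + noise) / len(per_sample_grads)
```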

AdaLoRA: Adaptive Low-Rank Adaptation

AdaLoRA improves fine-tuning for large language models (LLMs) by adaptively allocating the parameter budget according to the importance of individual weight matrices and layers. Unlike standard LoRA, which distributes parameters evenly, AdaLoRA parameterizes each update in an SVD-like form and adjusts the rank of the LoRA matrices based on their singular values, which indicate their significance. By pruning unimportant singular-value triplets and updating only the crucial ones, it reduces memory usage without compromising performance, ensuring that more parameters (a higher rank r) go to the weight matrices that matter most for the fine-tuning task. This adaptive approach allows AdaLoRA to achieve superior results across various models and tasks while staying within a fixed parameter budget.

(Figure: the x-axis is the layer index; the y-axis shows the different types of adapted weight matrices.)
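
A hedged sketch of the SVD-style parameterization (module and field names are illustrative): the update is held as P·diag(λ)·Q, and pruning a triplet just zeroes one entry of λ, lowering that layer's effective rank.

```python
import torch
import torch.nn as nn

class AdaLoRADelta(nn.Module):
    """AdaLoRA-style update Delta W = P diag(lam) Q; the mask marks
    pruned singular-value triplets (importance scoring not shown)."""
    def __init__(self, in_features, out_features, max_rank=12):
        super().__init__()
        self.P = nn.Parameter(torch.randn(out_features, max_rank) * 0.01)
        self.lam = nn.Parameter(torch.zeros(max_rank))       # "singular values"
        self.Q = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.register_buffer("mask", torch.ones(max_rank))   # 0 = pruned triplet

    def forward(self, x):
        # x Q^T diag(lam * mask) P^T == applying P diag(lam * mask) Q to x
        return (x @ self.Q.T) * (self.lam * self.mask) @ self.P.T
```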

Delta-LoRA: Adding Delta Updates

Delta-LoRA builds on the basic LoRA technique by also updating the original weight matrix W, but in a unique way. Instead of updating W through its own gradients, Delta-LoRA computes the difference (delta) between the product of the low-rank matrices A and B at two consecutive training steps and adds this delta to W. Because W is updated with this delta rather than by the optimizer, there is no need to store first- and second-order moments for W in the optimizer state. This allows more nuanced updates to the full weight matrix, potentially leading to better fine-tuning results.
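
A hedged sketch of a single training step (names and the scale factor are illustrative; W is the frozen pre-trained weight of one layer):

```python
import torch

def delta_lora_step(W, A, B, optimizer, scale=2.0):
    """Run the usual LoRA update on A and B, then fold the change in
    their product into W itself; no optimizer state is kept for W."""
    with torch.no_grad():
        prev = B @ A                 # product A*B before this step
    optimizer.step()                 # ordinary optimizer update of A and B
    with torch.no_grad():
        W += scale * (B @ A - prev)  # add the delta of the product to W
```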

DoRA: Decomposing Weight Adaptation

Weight-Decomposed Low-Rank Adaptation (DoRA) starts from the observation that any weight matrix can be decomposed into a magnitude and a direction. By decomposing the pre-trained weight into these two components, DoRA fine-tunes each one separately, allowing for more precise adjustments: the direction component is updated with LoRA's low-rank method, while the magnitude is trained as its own small vector. This keeps the approach efficient and the number of trainable parameters low. Unlike LoRA, which tends to change magnitude and direction together, DoRA adjusts them independently, producing training behavior that closely resembles full fine-tuning (FT). This separation enables DoRA to achieve better performance and stability without adding inference overhead.
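
A hedged PyTorch sketch of the decomposition (column-wise norms follow the paper's convention; class and field names are illustrative):

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """DoRA sketch: W' = m * V / ||V||_c, where the direction V gets a
    LoRA update and the magnitude m is its own trainable vector."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        W = base.weight.detach()                            # (out, in), frozen
        self.register_buffer("W0", W)
        self.m = nn.Parameter(W.norm(dim=0, keepdim=True))  # per-column magnitude
        self.A = nn.Parameter(torch.randn(r, W.shape[1]) * 0.01)
        self.B = nn.Parameter(torch.zeros(W.shape[0], r))

    def forward(self, x):
        V = self.W0 + self.B @ self.A                       # LoRA-updated direction
        W_adapted = self.m * (V / V.norm(dim=0, keepdim=True))
        return x @ W_adapted.T
```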

VeRA: Shared Low-Rank Matrices Across Layers

VeRA (Vector-based Random Matrix Adaptation) takes a different approach to low-rank adaptation. In traditional LoRA, each layer of the model has its own pair of low-rank matrices, A and B, both trained during fine-tuning. VeRA instead uses a single pair of low-rank matrices shared across all layers of the model. These matrices are random and frozen, meaning they are never trained. VeRA only learns small, layer-specific scaling vectors, denoted b and d, which are the sole trainable parameters. This drastically reduces the number of trainable parameters and further streamlines the fine-tuning process.
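
A hedged sketch: a single frozen random pair (A, B) is created once and shared by every layer; each layer trains only its two small vectors d and b (vector names follow the paper's notation, everything else is illustrative):

```python
import torch
import torch.nn as nn

class VeRALinear(nn.Module):
    """VeRA sketch: frozen, shared random A and B; per-layer trainable
    scaling vectors d (over rank dims) and b (over output dims)."""
    def __init__(self, base: nn.Linear, shared_A: torch.Tensor, shared_B: torch.Tensor):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad = False
        self.A = shared_A   # (r, in), frozen, shared across layers
        self.B = shared_B   # (out, r), frozen, shared across layers
        self.d = nn.Parameter(torch.ones(shared_A.shape[0]))
        self.b = nn.Parameter(torch.zeros(shared_B.shape[0]))  # zero: no change at start

    def forward(self, x):
        h = (x @ self.A.T) * self.d                      # Lambda_d A x
        return self.base(x) + (h @ self.B.T) * self.b    # plus Lambda_b B (...)
```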

LoHa: Low-Rank Hadamard Product

LoHa approximates the weight update ∆W as an element-wise (Hadamard) product of two low-rank matrices instead of a single regular matrix product. Breaking ∆W into four smaller factor matrices in this way increases the expressiveness of the update (the rank of a Hadamard product can be as high as the product of the factors' ranks) while keeping the increase in trainable parameters minimal. LoHa was initially developed for computer-vision tasks focused on generating diverse images; it can be applied to other model types, although current parameter-efficient fine-tuning (PEFT) implementations do not yet support it for embedding layers.
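
A hedged sketch of the update (four factors, one Hadamard product; shapes and initialization are illustrative):

```python
import torch
import torch.nn as nn

class LoHaDelta(nn.Module):
    """LoHa sketch: Delta W = (B1 A1) * (B2 A2) element-wise, i.e. the
    Hadamard product of two low-rank matrices from four small factors."""
    def __init__(self, in_features, out_features, r=8):
        super().__init__()
        self.A1 = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B1 = nn.Parameter(torch.randn(out_features, r) * 0.01)
        self.A2 = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B2 = nn.Parameter(torch.zeros(out_features, r))  # start at Delta W = 0

    def forward(self, x):
        delta_w = (self.B1 @ self.A1) * (self.B2 @ self.A2)   # Hadamard product
        return x @ delta_w.T
```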

LoKr: Low-Rank Kronecker Product

LoKr fine-tunes models by expressing the weight update as a Kronecker product of smaller matrices instead of a standard matrix product. Initially developed for image-generation models and applicable to other model types, LoKr exploits the block structure of the Kronecker product to preserve the rank of the weight update while using very few parameters, and the product can be applied efficiently via vectorization (stacking matrix columns) rather than materializing the full matrix. The result is efficient, expressive model adaptation for different tasks without sacrificing performance.
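
A hedged sketch using torch.kron (block sizes are illustrative; real implementations factor the layer's actual dimensions):

```python
import torch
import torch.nn as nn

class LoKrDelta(nn.Module):
    """LoKr sketch: Delta W = kron(C, B A). The Kronecker product of a
    small full matrix C with a low-rank factor B A yields a large,
    structured update from very few parameters."""
    def __init__(self, out_blocks=8, in_blocks=8, out_size=16, in_size=16, r=4):
        super().__init__()
        # Full update has shape (out_blocks*out_size, in_blocks*in_size)
        self.C = nn.Parameter(torch.zeros(out_blocks, in_blocks))  # zero: Delta W = 0
        self.A = nn.Parameter(torch.randn(r, in_size) * 0.01)
        self.B = nn.Parameter(torch.randn(out_size, r) * 0.01)

    def forward(self, x):
        delta_w = torch.kron(self.C, self.B @ self.A)
        return x @ delta_w.T
```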

LoRA-drop: Efficiently Selecting Important Layers

LoRA-drop optimizes fine-tuning by applying LoRA adapters only to the layers where they matter most. The method has two main steps. First, all LoRA adapters are trained briefly on a subset of the data. Then each adapter's importance is measured by the magnitude of its output B*A*x, where A and B are the LoRA matrices and x is the input; this output indicates how much the adapter changes the layer's behavior. If the output is large, the adapter has a strong impact on its layer and is kept; if it is small, the adapter has little influence and can be dropped or made to share parameters with other unimportant adapters. By focusing resources on the most important layers, LoRA-drop trains far fewer parameters, with only marginal changes in accuracy and reduced computation time.
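
A hedged sketch of the scoring pass, assuming adapters shaped like the LoRALinear sketch above (hook-based, so each adapter sees its real layer input):

```python
import torch

def lora_drop_scores(model, data_loader, num_batches=8):
    """After a brief warm-up training run, accumulate the norm of each
    adapter's output B*A*x on a data subset; adapters with small scores
    are dropped or made to share parameters."""
    scores, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            x = inputs[0]
            delta = (x @ module.A.T @ module.B.T) * module.scaling
            scores[name] = scores.get(name, 0.0) + delta.norm().item()
        return hook

    for name, module in model.named_modules():
        if isinstance(module, LoRALinear):          # sketch class from earlier
            hooks.append(module.register_forward_hook(make_hook(name)))

    model.eval()
    with torch.no_grad():
        for i, batch in enumerate(data_loader):
            if i >= num_batches:
                break
            model(batch)                            # hooks record adapter outputs
    for h in hooks:
        h.remove()
    return scores                                   # keep the top-scoring layers
```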

QLoRA: Quantized Low-Rank Adaptation

QLoRA (Quantized Low-Rank Adaptation) extends LoRA by quantizing the weights of the pre-trained LLM to 4-bit precision, significantly reducing memory usage and making it feasible to fine-tune such models on a single GPU. QLoRA introduces several key innovations: 4-bit NormalFloat (a quantization data type well suited to normally distributed weights), double quantization (which compresses the quantization constants themselves), and paged optimizers (which use NVIDIA unified memory to absorb gradient-checkpointing memory spikes). These enhancements allow QLoRA to match or exceed the performance of traditional LoRA while dramatically reducing memory requirements, at the cost of only a modest slowdown in training from the quantization and dequantization steps.
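
With the Hugging Face stack (transformers, peft, bitsandbytes), QLoRA-style fine-tuning is mostly a configuration choice; a sketch, with the model id and LoRA hyperparameters as placeholder examples:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # 4-bit NormalFloat
    bnb_4bit_use_double_quant=True,       # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # example model id; swap in your own
    quantization_config=bnb_config,
)

# LoRA adapters train in higher precision on top of the frozen 4-bit base.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```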

Conclusion

The evolution of LoRA and its variants demonstrates the ongoing efforts to make fine-tuning large models more efficient and cost-effective. By introducing such techniques, researchers continue to push the boundaries of what’s possible in model adaptation. These advancements not only reduce the computational resources required but also improve the overall performance of fine-tuned models. As large language models continue to grow in size and complexity, these innovative approaches will play a crucial role in making them more accessible and practical for a wide range of applications.
