
Applying Linear Regression Theory Manually in TensorFlow

A weekly deep learning exploration of probabilistic methods. This series extends conventional deep learning models to quantify uncertainty, that is, to establish what these models are unsure about. We leverage TensorFlow and TensorFlow Probability, a Python library built on TensorFlow for probabilistic reasoning and statistical analysis.

Linear Regression Implementation Using TensorFlow from the Ground Up

In deep learning, uncertainty is a crucial aspect that is often overlooked. Probabilistic linear regression addresses this gap. This article is part of the "Probabilistic Deep Learning" series, and it focuses on extending deep learning models to quantify uncertainty.

The model, implemented with TensorFlow and TensorFlow Probability, correctly recovers both the mean and the standard deviation of the Gaussian noise in the data. This allows confidence intervals to be constructed around the predictions, a significant improvement over bare point estimates.

To implement probabilistic linear regression, you define a model whose output is a distribution, such as a Normal distribution, parameterized by the linear regression output. TensorFlow Probability layers and distributions are used to model the predictive distribution, capturing uncertainty in the predictions.

Here's a simplified example of how to implement this in TensorFlow 2.x:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfpl = tfp.layers

num_features = 1  # single input feature; adjust to your data

inputs = tf.keras.Input(shape=(num_features,))
# Linear layer predicting the mean of the target distribution
linear_output = tf.keras.layers.Dense(1)(inputs)

# Softplus keeps the predicted standard deviation positive
stddev = tf.keras.layers.Dense(1, activation=tf.nn.softplus)(inputs)

# Wrap both outputs into a Normal predictive distribution
output_dist = tfpl.DistributionLambda(
    lambda t: tfd.Normal(loc=t[0], scale=t[1]))([linear_output, stddev])

model = tf.keras.Model(inputs=inputs, outputs=output_dist)

# Negative log-likelihood: the standard loss for a distribution output
def nll(y_true, y_pred):
    return -y_pred.log_prob(y_true)

model.compile(optimizer='adam', loss=nll)
```
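The `nll` loss above is simply the negative log-density of a Gaussian. As a sanity check, here is the closed form it corresponds to, in plain Python (no TensorFlow required):

```python
import math

def gaussian_nll(y, loc, scale):
    # Negative log-density of y under Normal(loc, scale):
    # 0.5 * log(2 * pi * scale^2) + (y - loc)^2 / (2 * scale^2)
    return 0.5 * math.log(2 * math.pi * scale**2) + (y - loc)**2 / (2 * scale**2)

# At y == loc, the NLL reduces to 0.5 * log(2 * pi * scale^2)
print(gaussian_nll(0.0, 0.0, 1.0))  # ≈ 0.9189
```

Minimizing this quantity over the training data is equivalent to maximum likelihood estimation of the mean and standard deviation.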

This approach has the model output a distribution over the targets instead of a point estimate, directly capturing the predictive uncertainty inherent in probabilistic regression.
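Given a predicted mean and standard deviation, a confidence interval follows directly. A minimal sketch, with illustrative values rather than actual model outputs:

```python
def normal_interval(loc, scale, z=1.96):
    # Symmetric ~95% interval for a Normal predictive distribution
    return loc - z * scale, loc + z * scale

# Hypothetical predicted mean 2.0 and standard deviation 0.5
lo, hi = normal_interval(2.0, 0.5)
print(lo, hi)  # ≈ 1.02, 2.98
```

In practice, `loc` and `scale` would come from the trained model's output distribution, e.g. via `model(x).mean()` and `model(x).stddev()`.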

In the near future, this approach will be extended to non-linear data, further expanding the capabilities of probabilistic deep learning.

The data used in this article was artificially generated from a linear equation with independent and identically distributed Gaussian noise. As the article's plots show, one of the main advantages of a probabilistic model is the ability to generate samples that follow the same generative process as the observations.
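A data-generating process of this kind can be sketched in a few lines of NumPy. The slope, intercept, and noise level below are illustrative assumptions, not the article's actual values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed ground-truth parameters for illustration
a, b, sigma = 2.0, -1.0, 0.5

x = rng.uniform(-1.0, 1.0, size=1000)
y = a * x + b + rng.normal(0.0, sigma, size=1000)  # i.i.d. Gaussian noise

# The residuals recover the noise distribution (mean ~0, std ~sigma)
residuals = y - (a * x + b)
print(residuals.mean(), residuals.std())
```

A model that captures both the mean and the standard deviation of this process can draw new samples statistically indistinguishable from the observations.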

The code for this article is available on the author's GitHub, allowing you to experiment with probabilistic linear regression yourself.

TensorFlow and TensorFlow Probability make this approach practical: they provide the building blocks to define and train a model that captures both the mean and the standard deviation of the data, yielding confidence intervals alongside predictions. Extending the same tooling to non-linear data is a natural next step for probabilistic deep learning.
