

What is Probabilistic Programming?

Probabilistic programming (PP) is still a relatively unfamiliar topic for many data scientists. However, it is an area that is quickly becoming more important.

PP is a programming paradigm in which probabilistic models are specified and inference over those models is carried out automatically. It aims to bring together probabilistic modelling and traditional general-purpose programming so that each enhances and simplifies the other. It can be used to build systems that assist decision-making in the presence of uncertainty.

In this blog, I briefly discuss where PP fits in. PP can be considered a tool for statistical modelling.

Randomness is the essential ingredient of PP, and the objective of PP is to produce a statistical analysis that explains an observed phenomenon.

A probabilistic programming language (PPL) is built on a few fundamental elements: primitives to generate random numbers, primitives to estimate probabilities and expectations, and finally primitives to perform probabilistic inference.
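As a minimal plain-Python sketch of the first two kinds of primitives (not tied to any particular PPL; all names here are illustrative), we can generate random samples and use them to estimate a probability and an expectation by Monte Carlo. The inference primitive is illustrated in the next example.

```python
import random

# Primitive 1: generate random numbers (draws from a standard normal).
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Primitive 2: estimate probabilities and expectations from those samples.
prob_positive = sum(x > 0 for x in samples) / len(samples)    # P(X > 0), close to 0.5
expected_square = sum(x * x for x in samples) / len(samples)  # E[X^2], close to 1.0

print(prob_positive, expected_square)
```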

A PPL works a little differently from regular languages. Prior distributions encode the model's assumptions. The posterior distribution of the model parameters is then computed from observed data; that is, inference is used to update the prior probabilities in light of the data.
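To make this prior-to-posterior update concrete, here is a small example with made-up numbers: estimating the bias of a coin with a Beta prior and Bernoulli observations. Because the Beta prior is conjugate to this likelihood, the posterior can be computed in closed form in plain Python.

```python
# Prior assumption: coin bias theta ~ Beta(alpha, beta); Beta(1, 1) is uniform.
alpha_prior, beta_prior = 1.0, 1.0

# Hypothetical observed data: 7 heads out of 10 flips.
heads, tails = 7, 3

# Conjugate update: the posterior is again a Beta distribution.
alpha_post = alpha_prior + heads
beta_post = beta_prior + tails

posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)  # ~0.67: pulled from the prior mean of 0.5 towards the data
```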

In PP, latent random variables enable us to model uncertainty. Statistical inference then consists of determining the values (more precisely, the posterior distribution) of these latent variables from the observed data.



Conceptually, Deep PPLs can express Bayesian neural networks with probabilistic weights and biases. Practically speaking, Deep PPLs have materialized as new probabilistic languages and libraries that integrate seamlessly with popular deep learning frameworks.

But now the question is: how do you use it?

Packages such as PyMC3 are one option; they let you implement Bayesian probabilistic graphical models.
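A minimal PyMC3 sketch of such a model might look like the following (the data are hypothetical, and exact argument names can vary slightly between PyMC3 versions): priors on an unknown mean and noise scale, a likelihood tied to the observed data, and MCMC sampling of the posterior.

```python
import numpy as np
import pymc3 as pm

# Hypothetical data: noisy measurements of an unknown quantity.
data = np.random.normal(loc=2.0, scale=1.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)      # prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)     # prior on the noise scale
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)  # likelihood

    # Draw posterior samples with MCMC (NUTS by default).
    trace = pm.sample(1000, tune=1000)

print(trace["mu"].mean())  # posterior mean of mu, close to 2.0
```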

One way of combining deep learning with PP is through Deep PPLs implemented in packages such as TensorFlow Probability (TFP). TFP is a library for probabilistic reasoning and statistical analysis: a TensorFlow-based Python package that makes it straightforward to combine probabilistic models and deep learning on modern hardware (GPUs, TPUs).
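As one hedged illustration of that integration (assuming a compatible TensorFlow/TFP version pairing; the layer sizes and inputs below are purely illustrative), TFP's DenseFlipout layer can replace a standard Dense layer, giving the Bayesian neural network with probabilistic weights and biases mentioned earlier.

```python
import tensorflow as tf
import tensorflow_probability as tfp

# A small Bayesian neural network: each DenseFlipout layer places
# distributions over its weights and biases rather than point estimates.
model = tf.keras.Sequential([
    tfp.layers.DenseFlipout(16, activation="relu"),
    tfp.layers.DenseFlipout(1),
])

# Because the weights are sampled, repeated forward passes on the same
# input give different predictions, reflecting the model's uncertainty.
x = tf.random.normal([4, 3])  # illustrative batch: 4 examples, 3 features
predictions = [model(x) for _ in range(5)]
```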

Note: Probabilistic programming does not involve writing software that behaves randomly.

References:


  1. https://medium.com/swlh/a-gentle-introduction-to-probabilistic-programming-languages-bf1e19042ab6
  2. https://www.math.ucdavis.edu/~gravner/MAT135B/materials/ch11.pdf
  3. https://www.cs.cornell.edu/courses/cs4110/2016fa/lectures/lecture33.html
  4. https://www.tensorflow.org/probability


Thanks for reading this blog. 


 



