
Commit 66567a9

Merge branch 'main' into main
2 parents e4ea214 + 113753c

15 files changed (+43664, -1 lines)

README.md

Lines changed: 99 additions & 1 deletion
@@ -1,3 +1,101 @@
# Inductions-22
##### Fork the repo and create a pull request of your solution to this repository for the corresponding tasks
##### Note: Do not send pull requests to the main repo. Make sure you are sending requests inside your tasks folder
Sentimental

> medium link:
> [https://medium.com/@advaith142001/farmers-protest-twitter-sentiment-analysis-a8aca1e52f43](https://medium.com/@advaith142001/farmers-protest-twitter-sentiment-analysis-a8aca1e52f43)
Parameter:
A variable that is internal to the model and whose value can be estimated from the data.
- They are required by the model when making predictions.
- They are often saved as part of the learned model.

Hyperparameter:
A variable that is external to the model and whose value cannot be estimated from the data.
- They are often used in processes to help estimate the model parameters.
- They are often specified by the practitioner.
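To make the distinction concrete, here is a minimal sketch (my own illustration, not from the original task) using scikit-learn's Ridge regression: `alpha` is a hyperparameter chosen by the practitioner, while `coef_` and `intercept_` are parameters estimated from the data.

```python
import numpy as np
from sklearn.linear_model import Ridge

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# alpha is a hyperparameter: specified by the practitioner, not estimated from data.
model = Ridge(alpha=1.0)
model.fit(X, y)

# coef_ and intercept_ are parameters: estimated from the data during fit()
# and saved as part of the learned model.
print(model.coef_, model.intercept_)
```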
Gaussian process:
It's a powerful algorithm for both regression and classification.
- A Gaussian process is a probability distribution over possible functions.
Kernel:
A method for applying a linear classifier to non-linear problems.
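As a concrete illustration (my own sketch, not from the original), the widely used RBF kernel computes a similarity between two points; the GP covariance k(x, x') mentioned later is often chosen to be exactly this function.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0):
    """RBF (squared-exponential) kernel: a similarity that decays with distance."""
    return np.exp(-0.5 * ((x1 - x2) / length_scale) ** 2)

print(rbf_kernel(0.0, 0.0))  # 1.0: identical points are maximally similar
print(rbf_kernel(0.0, 2.0))  # ~0.14: similarity decays as points move apart
```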
Surrogate method:
A statistical model used to accurately approximate the simulation output.
Probabilistic model:
Probabilistic modelling is a statistical approach that uses the effect of random occurrences or actions to forecast the possibility of future results.
- It provides a comprehensive understanding of the uncertainty associated with predictions.
- Using this method, we can quickly determine how confident a machine learning model is and how accurate its predictions are.
Nomenclatures:

1. Surrogate model (a Gaussian process in this case)
It is the statistical/probabilistic model of the "black-box" function.
It works as a proxy to the latter for experimenting with different parameters.
This model is used to simulate the function output instead of calling the actual, costly function.
2. Acquisition function
It is a metric function which decides which parameter value is likely to return the optimal value from the function.
There are many variations of it; we will work with "Expected Improvement".
Problem statement
To summarize a research paper which discusses the efficiency and implementation of Bayesian optimization.
Pseudo code for Bayesian optimization

SURROGATE FUNCTION (Gaussian process)
Step 1. Loop over all the sampled values of the input x, where the evaluation takes place.
Step 2. Build the k and f vectors, i.e. the data.
Step 3. Build the matrices X and Y.
Step 4. Calculate mu and sigma.
Step 5. Append mu to the predictedMu array and sigma to the predictedSigma array.
Step 6. Calculate Omega as the mean of the black-box function over the sampled points.
Step 7. Calculate Kappa = predictedMu + Omega.
Step 8. Return the values Kappa (the estimated mean of the surrogate function) and predictedSigma (the estimated variance of the surrogate function).

I have used the sklearn module to import the Gaussian process in my model, as sketched below.
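A minimal sketch of that surrogate in scikit-learn (my own illustration of the steps above; the blackbox function, the grid, and the Omega/Kappa shift follow the pseudocode, but their exact forms here are assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical black-box function standing in for the costly evaluation.
def blackbox(x):
    return -(x - 2.0) ** 2 + 3.0

# Steps 1-3: sampled inputs and their evaluations (the X and Y matrices).
X = np.array([[0.0], [1.0], [2.5], [4.0]])
Y = blackbox(X).ravel()

# Fit the GP surrogate to the sampled data.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X, Y)

# Steps 4-5: predicted mean (mu) and standard deviation (sigma) over a grid.
grid = np.linspace(0.0, 4.0, 50).reshape(-1, 1)
predictedMu, predictedSigma = gp.predict(grid, return_std=True)

# Steps 6-8: Omega is the mean of the black-box values at the sampled points;
# Kappa is the pseudocode's shifted estimate of the surrogate mean.
Omega = Y.mean()
Kappa = predictedMu + Omega
```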
ACQUISITION FUNCTION
The usual acquisition functions are:
1. Upper confidence bound
2. Lower confidence bound
3. Probability of improvement
4. Expected Improvement
Mathematical interpretation

Let the actual function be f(x).
The observed (Bayesian) function is y = f(x) + e, where e (eta) is a small noise term.
Instead, y can be represented as a Gaussian distribution with mean f(x) and some variance.
A GP is completely specified by its mean function mu(x) and its covariance k(x, x').

The loss function can therefore be represented by a Gaussian distribution with its mean and covariance as mentioned above.
Coming to the acquisition function:

Expected improvement:
EI(x) = E[max{0, f(x) - f(x*)}]
where x* is the current optimal set of hyperparameters. Maximizing this quantity will give us the point that improves upon the function the most.

EI(x) = (mu(x) - f(x*)) * Psi(Z) + sigma(x) * pi(Z)   if sigma(x) > 0
      = 0                                             if sigma(x) = 0
where
Z = (mu(x) - f(x*)) / sigma(x)

Here, Psi(Z) is the cumulative distribution function and pi(Z) is the probability density function of the standard normal.
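A short sketch of this Expected Improvement formula (my own illustration; it reuses the predictedMu / predictedSigma arrays from the surrogate sketch above, and f_best plays the role of f(x*)):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI(x) = (mu - f_best) * Psi(Z) + sigma * pi(Z); 0 where sigma == 0."""
    mu = np.asarray(mu)
    sigma = np.asarray(sigma)
    improvement = mu - f_best
    with np.errstate(divide="ignore", invalid="ignore"):
        Z = improvement / sigma
        ei = improvement * norm.cdf(Z) + sigma * norm.pdf(Z)
    return np.where(sigma > 0, ei, 0.0)  # EI is defined as 0 when sigma is 0

# Usage with the surrogate's outputs, taking the best observed value as f(x*):
# ei = expected_improvement(predictedMu, predictedSigma, Y.max())
```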
Final points:
1. Given the observed values f(x), update the posterior expectation of f using the GP model.
2. Find xnew that maximizes the EI: xnew = argmax EI(x).
3. Compute the value of f for the point xnew.

By iterating over different values we can build a model, or function, which closely fits the actual function; the loop sketched below ties these steps together.
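A compact sketch of that loop (my own assembly of the earlier snippets; gp, grid, X, Y, blackbox, and expected_improvement are the hypothetical definitions from the sketches above):

```python
import numpy as np

# Bayesian optimization loop: fit the surrogate, maximize EI, evaluate, repeat.
for _ in range(10):
    gp.fit(X, Y)                                    # 1. update the GP posterior
    mu, sigma = gp.predict(grid, return_std=True)
    ei = expected_improvement(mu, sigma, Y.max())   # 2. score candidates by EI
    x_new = grid[np.argmax(ei)]                     #    xnew = argmax EI(x)
    y_new = blackbox(x_new[0])                      # 3. evaluate the costly function
    X = np.vstack([X, x_new.reshape(1, -1)])        # add the new sample to the data
    Y = np.append(Y, y_new)

print("best x found so far:", X[np.argmax(Y)][0])
```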
