1. Partition tolerance is not optional. It's a given: packets will drop and communication between nodes is bound to fail.
2. So all you can really choose between is Availability and Consistency.
3. Choosing Consistency - you can stop accepting writes, or accept a write only on the node that is the "master" of the data being written.
4. Choosing Availability - you can accept all writes, but clients may get "stale data".
Two more relevant metrics that better capture performance: Yield & Harvest.
Yield is similar to uptime, with one major difference: if a node is down for 1 second, uptime is the same whether that happens at peak or off-peak hours, but yield is vastly different. Yield directly maps to what the user experienced. So yield = % of user requests served.
Harvest = available data / total data. If the data lies on 3 nodes but the server was able to serve data from only 2 of them, harvest = 2/3 ≈ 67%.
Now we need to decide whether faults impact yield or harvest. Replicated systems tend to map faults to reduced yield, since fewer requests will complete. Partitioned systems map faults to reduced harvest, since less data will be available.
Friday, June 8, 2018
Tuesday, June 5, 2018
Tuesday, April 24, 2018
chmod a+x certbot-auto
sudo ./certbot-auto certonly --server https://acme-v02.api.letsencrypt.org/directory --manual --preferred-challenges dns -d *.domainname.com
For putting TXT record in NameCheap:
In HostName, put _acme-challenge, in value put the string given on the command line.
Then in httpd.conf:
ServerAlias *.domainname.com
SSLCertificateFile /etc/letsencrypt/live/domainname.com-0001/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/domainname.com-0001/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/domainname.com-0001/fullchain.pem
ErrorLog logs/domainname-error_log
CustomLog logs/domainname-access_log common
Options Indexes FollowSymLinks
Allow from all
Monday, April 23, 2018
Friday, March 23, 2018
Edge detection with convolutional filter.
Image is n x n, filter is f x f (f is usually odd).
Valid convolution: you don't pad the original image; with an n x n image and an f x f filter you get an (n-f+1) x (n-f+1) output, which tells you where the edges are.
Same convolution: you pad the original image so that every pixel gets an equal opportunity to participate in the output, and the output size equals the input size. With stride s = 1 this means (n + 2p - f)/1 + 1 = n, i.e. p = (f-1)/2 (another reason f is usually odd).
Strided convolution: you do the convolution while making strides of size s. The output image size will be floor((n + 2p - f)/s) + 1.
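The output-size formula above is easy to sanity-check in code (a small helper of mine; the example numbers are assumptions for illustration):

```python
def conv_output_size(n, f, p=0, s=1):
    """Output side length of a convolution: floor((n + 2p - f)/s) + 1."""
    return (n + 2 * p - f) // s + 1

print(conv_output_size(6, 3))            # valid conv: n - f + 1 = 4
print(conv_output_size(6, 3, p=1))       # same conv: p = (f-1)/2 keeps size 6
print(conv_output_size(7, 3, p=1, s=2))  # strided: (7 + 2 - 3)/2 + 1 = 4
```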
Convolutions over 3D volumes:
For e.g. RGB image.
Image is 6x6x3 and filter is 3x3x3, then you get a 4x4 output. The first 3x3 slice of the filter detects edges in the red channel, and so on.
What if you want to use multiple filters at the same time? For e.g. detect Vertical/Horizontal edges together? Or detect edges at various angles?
In the above example, if you apply two 3x3x3 filters you get a 4x4x2 output, i.e. (n - f + 1) x (n - f + 1) x (number of filters).
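The multi-filter case can be sketched with plain loops (a naive NumPy sketch of mine, not an efficient implementation; the random image and filters are placeholders):

```python
import numpy as np

def conv3d(image, filters):
    """Valid convolution of an n x n x c image with k filters of shape f x f x c.
    Returns an (n-f+1) x (n-f+1) x k volume, one channel per filter."""
    n, _, c = image.shape
    k, f, _, _ = filters.shape  # filters stacked as k x f x f x c
    out = np.zeros((n - f + 1, n - f + 1, k))
    for i in range(n - f + 1):
        for j in range(n - f + 1):
            patch = image[i:i+f, j:j+f, :]      # f x f x c window
            for m in range(k):
                out[i, j, m] = np.sum(patch * filters[m])
    return out

img = np.random.rand(6, 6, 3)
fil = np.random.rand(2, 3, 3, 3)   # 2 filters of size 3x3x3
print(conv3d(img, fil).shape)      # (4, 4, 2)
```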
How to tune parameters
One thing to take away: as you go deeper in a neural network, you typically start off with larger images (say 39 x 39); the height and width stay roughly the same for a while and then gradually trend down (39 -> 37 -> 17 -> 7 in this example), whereas the number of channels generally increases (3 -> 10 -> 20 -> 40). You see this general trend in a lot of other convolutional neural networks as well.
Similar to convolutional layer, there is pooling layer:
e.g. max pooling - if a feature is detected anywhere in the window, preserve it.
It has some hyperparameters but no parameters (nothing to learn via gradient descent).
Hyperparameters: f, s (filter size, stride).
Similarly, there is average pooling, which takes the average of each window instead of the max.
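Max pooling is short enough to sketch directly (my toy example; the 4x4 input values are made up):

```python
import numpy as np

def max_pool(x, f=2, s=2):
    """Max pooling: slide an f x f window with stride s, keep the max.
    f and s are hyperparameters; there is nothing to learn."""
    n = x.shape[0]
    out_n = (n - f) // s + 1
    out = np.zeros((out_n, out_n))
    for i in range(out_n):
        for j in range(out_n):
            out[i, j] = x[i*s:i*s+f, j*s:j*s+f].max()
    return out

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 2],
              [3, 1, 9, 8],
              [2, 0, 7, 4]])
print(max_pool(x))  # max of each 2x2 window: [[6, 5], [3, 9]]
```

Swapping `.max()` for `.mean()` gives average pooling.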
Tuesday, January 30, 2018
Public key encryption
Secure TDF - a triple (G, F, F^-1) is secure if F(pk, .) is a "one-way" function: it can be evaluated, but can't be inverted without the secret key sk. pk is the public key; the secret key is the trapdoor.
Public key encryption from TDFs
Encryption:
1. Choose a random x
2. k <= H(x), where H is a hash function
3. y = F(pk, x), where (G, F, F^-1) is a secure TDF and (pk, sk) are generated by G
4. c <= E(k, m), where (E, D) is a symmetric authenticated encryption scheme defined over (K, M, C)
5. Output (y, c)
Decryption:
1. x <= F^-1(sk, y)
2. k = H(x)
3. m = D(k, c)
If we applied F directly to m, encryption would become deterministic - there would be no randomness (which is provided by x).
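The construction above can be sketched end to end (my assumptions, not the course's: textbook RSA with tiny primes plays the role of the secure TDF, SHA-256 plays H, and a one-time XOR pad stands in for the authenticated cipher (E, D) - real systems use large primes and a real AEAD scheme):

```python
import hashlib
import random

# G: key generation. Toy primes; never this small in practice.
p, q = 1009, 1013
N = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)          # e*d = 1 mod phi(N)
pk, sk = (N, e), (N, d)

def F(pk, x):                # the trapdoor function: x^e mod N
    N, e = pk
    return pow(x, e, N)

def F_inv(sk, y):            # inversion needs the trapdoor sk
    N, d = sk
    return pow(y, d, N)

def H(x):                    # derive a symmetric key from x
    return hashlib.sha256(str(x).encode()).digest()

def E(k, m):                 # toy symmetric cipher: XOR with key bytes
    return bytes(b ^ k[i % len(k)] for i, b in enumerate(m))

D = E                        # XOR is its own inverse

def encrypt(pk, m):
    x = random.randrange(2, pk[0])   # 1. random x
    k = H(x)                         # 2. k = H(x)
    y = F(pk, x)                     # 3. y = F(pk, x)
    c = E(k, m)                      # 4. c = E(k, m)
    return y, c                      # 5. output (y, c)

def decrypt(sk, y, c):
    x = F_inv(sk, y)                 # 1. x = F^-1(sk, y)
    k = H(x)                         # 2. k = H(x)
    return D(k, c)                   # 3. m = D(k, c)

y, c = encrypt(pk, b"hello")
print(decrypt(sk, y, c))             # b'hello'
```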
The RSA Trapdoor permutation
Review: arithmetic mod composites
Let N = p*q where p, q are primes of roughly the same size => p, q are both roughly sqrt(N).
Z_N = {0, 1, 2, ..., N-1} and Z_N* = set of invertible elements in Z_N.
x in Z_N is invertible iff gcd(x, N) = 1.
Number of invertible elements = phi(N) = (p-1)(q-1) = N - p - q + 1 ~= N - 2*sqrt(N) ~= N, since N is very large (e.g. 600 digits, so sqrt(N) is only ~300 digits).
So Z_N* ~= Z_N => almost every element of Z_N is invertible.
Euler's theorem: for all x in Z_N*, x^phi(N) = 1 in Z_N.
How RSA works
0. Choose random primes p, q of roughly 1024 bits each; set N = p*q
1. Choose e, d s.t. e*d = 1 mod phi(N)
2. pk = (N, e), sk = (N, d), where e is the encryption exponent and d is the decryption exponent
3. For x in Z_N*, F(pk, x) = RSA(x) = x^e in Z_N
4. To decrypt:
5. RSA^-1(y) = y^d = (RSA(x))^d = (x^e)^d = x^(e*d); now e*d = 1 mod phi(N) means e*d = k*phi(N) + 1 for some integer k
6. RSA^-1(y) = x^(k*phi(N) + 1) = (x^phi(N))^k * x = x, by Euler's theorem (x^phi(N) = 1 since x is in Z_N*)
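Steps 0-6 check out numerically; here is a worked example with small primes of my choosing (the classic p = 61, q = 53 textbook numbers, far too small for real use):

```python
p, q = 61, 53
N = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17
d = pow(e, -1, phi)          # 2753, since 17 * 2753 = 1 mod 3120
x = 65
y = pow(x, e, N)             # RSA(x) = x^e mod N = 2790
assert pow(y, d, N) == x     # RSA^-1(y) = y^d = x^(e*d) = x, by Euler's thm
print(y, pow(y, d, N))       # 2790 65
```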
Textbook RSA is insecure
Encrypt C = m^e
Decrypt C^d = m
PKCS1 v1.5
Uses RSA. Insecure: an attacker could check whether the most significant bytes of a ciphertext's decrypted message equal the expected padding prefix (0x02), and by repeating such checks decode the entire message. It's used in HTTPS, so the fix was to fall back to a random 46-byte string when the message is malformed, so that the attacker gets no information about the message.
PKCS1 v2.0 - OAEP (Optimal Asymmetric Encryption Padding)
Improvement over PKCS1 v1.5.
Public key encryption built from Diffie Hellman Protocol
IDH - Interactive Diffie Hellman