A local notion of similarity rests on the following thesis:
Two nodes are similar to each other if they share many neighbors.
In the case of directed graphs, we may replace neighbors with either predecessor or successor nodes. For instance, in a friendship network like Facebook, two people are similar if they have many friends in common. In a directed social network like Twitter, two users are similar if they share many followers or many followees.
Let $A = (a_{i,j})$ be the adjacency matrix of a possibly weighted graph. The general idea is to associate each node $i$ with the $i$-th row of the adjacency matrix. If the graph is directed, we can associate node $i$ either with the $i$-th adjacency row, if we focus on outgoing links, or with the $i$-th adjacency column, if we focus on incoming links. Hence each node is associated with a vector in $\mathbb{R}^n$, where $n$ is the number of nodes. The similarity (or dissimilarity) of two nodes $i$ and $j$ is then measured using a similarity (or distance) function between the associated vectors.
Let us focus on unweighted undirected graphs with $n$ nodes. We denote by $k_i = \sum_k A_{i,k}$ the degree of node $i$ and by $n_{i,j} = \sum_k A_{i,k} A_{j,k}$ the number of neighbors that nodes $i$ and $j$ have in common. Moreover, $A_i$ is the $i$-th row of $A$, a Boolean (0-1) vector of length $n$.
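These quantities are easy to compute for all pairs of nodes at once, since the matrix product $A A^T$ contains in entry $(i,j)$ exactly the number $n_{i,j}$ of common neighbors. Here is a minimal sketch in Python with NumPy, on a small hypothetical graph; the variables defined here are reused in the snippets that follow.

```python
import numpy as np

# Hypothetical 4-node undirected, unweighted graph (symmetric 0-1 matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])

n = A.shape[0]       # number of nodes
k = A.sum(axis=1)    # degrees: k[i] is k_i
N = A @ A.T          # common neighbors: N[i, j] is n_{i,j}
```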
We are going to describe a pool of similarity measures. The first one is cosine similarity: the similarity $\sigma_{i,j}$ of nodes $i$ and $j$ is the cosine of the angle between vectors $A_i$ and $A_j$: $$ \sigma_{i,j} = \frac{\sum_k A_{i,k} A_{j,k}}{\sqrt{\sum_k A_{i,k}^{2}} \sqrt{\sum_k A_{j,k}^{2}}} $$ The measure runs from 0 (orthogonal vectors or maximum dissimilarity) to 1 (parallel vectors or maximum similarity). Since the involved vectors are 0-1 vectors, we have that: $$ \sigma_{i,j} = \frac{n_{i,j}}{\sqrt{k_i k_j}} $$ That is, cosine similarity between $i$ and $j$ is the number of neighbors shared by $i$ and $j$ divided by the geometric mean of their degrees.
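As a sketch, the whole matrix of cosine similarities follows from the quantities computed above, assuming every node has positive degree so that no division by zero occurs:

```python
# Cosine similarity: common neighbors over the geometric mean of degrees.
# Assumes no isolated nodes (all degrees positive).
sigma_cos = N / np.sqrt(np.outer(k, k))
```

Note that the diagonal of this matrix is $1$, since $n_{i,i} = k_i$.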
An alternative similarity measure is the Pearson correlation coefficient: the similarity $\sigma_{i,j}$ of nodes $i$ and $j$ is the correlation coefficient between vectors $A_i$ and $A_j$: $$ \sigma_{i,j} = \frac{cov(A_i, A_j)}{sd(A_i) \, sd(A_j)} = \frac{\sum_k (A_{i,k} - \langle A_i \rangle) (A_{j,k} - \langle A_j \rangle)} {\sqrt{\sum_k (A_{i,k} - \langle A_i \rangle)^{2}} \sqrt{\sum_k (A_{j,k} - \langle A_j \rangle)^{2}}} $$ The measure runs from $-1$ (maximum negative correlation or maximum dissimilarity) to $1$ (maximum positive correlation or maximum similarity). Notice that values close to $0$ indicate no correlation, hence neither similarity nor dissimilarity. Again, since the involved vectors are 0-1 vectors, it is not difficult to see that the numerator of the correlation coefficient, which is the covariance between vectors $A_i$ and $A_j$ up to a positive constant factor, equals: $$ \sum_k (A_{i,k} - \langle A_i \rangle) (A_{j,k} - \langle A_j \rangle) = n_{i,j} - \frac{k_i k_j}{n} $$ Notice that $k_i k_j/n$ is the expected number of common neighbors of $i$ and $j$ if they chose their neighbors at random. Hence a positive covariance, or a similarity between $i$ and $j$, holds when $i$ and $j$ share more neighbors than we would expect by chance, while a negative covariance, or a dissimilarity between $i$ and $j$, occurs when $i$ and $j$ have fewer neighbors in common than we would expect by chance.
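Continuing the sketch above, the matrix of Pearson similarities can be computed with NumPy's row-wise correlation; nodes with constant rows (e.g. isolated nodes) would yield undefined entries:

```python
# Pearson similarity: correlation between the rows of A.
sigma_pearson = np.corrcoef(A)

# Numerator in closed form (the covariance up to a positive factor):
cov_num = N - np.outer(k, k) / n   # n_{i,j} - k_i k_j / n
```

The signs of the off-diagonal entries of `cov_num` agree with those of `sigma_pearson`.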
One could also normalize the actual number $n_{i,j}$ of common neighbors by dividing by (rather than subtracting) the expected value of this number: $$ \sigma_{i,j} = \frac{n_{i,j}}{k_i k_j / n} = n \frac{\sum_k A_{i,k} A_{j,k}}{\sum_k A_{i,k} \sum_k A_{j,k}} $$ In this case the tipping point is the value $1$: lower values indicate dissimilarity (less commonality than expected), while higher values show similarity (more commonality than expected).
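With the same variables, this ratio is a one-liner (again assuming positive degrees):

```python
# Ratio of actual to expected common neighbors: values above 1 mean
# more shared neighbors than expected by chance, values below 1 fewer.
sigma_ratio = N / (np.outer(k, k) / n)
```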
A measure of dissimilarity is the Euclidean distance: the dissimilarity $\delta_{i,j}$ of nodes $i$ and $j$ is the squared Euclidean distance between vectors $A_i$ and $A_j$: $$ \delta_{i,j} = \sum_k (A_{i,k} - A_{j,k})^2 $$ That is, in the case of Boolean vectors, the Euclidean dissimilarity between $i$ and $j$ is the number of neighbors that $i$ and $j$ do not share. By normalizing by the maximum such number (the sum of the nodes' degrees, attained when $i$ and $j$ have no common neighbors), we obtain a relative measure: $$ \delta_{i,j} = \frac{\sum_k (A_{i,k} - A_{j,k})^2}{k_i + k_j} $$ It runs between $0$ (all neighbors in common, maximum similarity) and $1$ (no neighbors in common, maximum dissimilarity). It turns out that $\delta_{i,j} = 1 - 2 n_{i,j} / (k_i + k_j)$, hence the corresponding similarity $1 - \delta_{i,j} = 2 n_{i,j} / (k_i + k_j)$ divides the number of common neighbors by the arithmetic mean of the degrees: yet another normalization of the number of common neighbors.
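As a final sketch with the same setup, the normalized Euclidean dissimilarity and its closed form can be checked against each other:

```python
# Squared Euclidean distance between 0-1 rows = number of unshared neighbors.
diff = A[:, None, :] - A[None, :, :]
delta = (diff ** 2).sum(axis=-1)     # equals k_i + k_j - 2 n_{i,j}

deg_sum = k[:, None] + k[None, :]    # k_i + k_j for every pair
delta_norm = delta / deg_sum         # relative dissimilarity in [0, 1]

# Agrees with the closed form 1 - 2 n_{i,j} / (k_i + k_j).
assert np.allclose(delta_norm, 1 - 2 * N / deg_sum)
```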