Histograms are a widely used statistical tool for inferring the underlying probability density function (pdf) of a data sample. For a given sample, a constant-bin-size histogram in one variable has a single free parameter: the bin size. The main goal in choosing the bin size is that the relevant features of the pdf are not lost in the bin integration while, at the same time, spurious features due to statistical fluctuations are integrated out as much as possible. Several algorithms have been proposed to decide on the right bin size for a given data sample. In this talk we focus on an algorithm based on a likelihood method that should provide the optimal choice of the bin size.
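The abstract does not spell out how the likelihood is constructed, so the details below are an assumption: a common likelihood-based approach is to score each candidate bin count by the leave-one-out log-likelihood, where every data point is evaluated under the histogram density estimated from the remaining points, and the bin size maximizing that score is selected. A minimal sketch of this idea:

```python
import numpy as np

# Toy sample; any 1-D data works. (Illustrative data, not from the talk.)
rng = np.random.default_rng(0)
data = rng.normal(size=1000)

def loo_log_likelihood(data, n_bins):
    """Leave-one-out log-likelihood of a constant-bin-size histogram.

    Each point is scored by the density estimated from the other
    points: (count_in_its_bin - 1) / ((n - 1) * bin_width).
    """
    n = len(data)
    counts, edges = np.histogram(data, bins=n_bins)
    width = edges[1] - edges[0]
    # Bin index of each point; clip the rightmost edge into the last bin.
    idx = np.clip(np.searchsorted(edges, data, side="right") - 1, 0, n_bins - 1)
    dens = (counts[idx] - 1) / ((n - 1) * width)
    if np.any(dens <= 0):  # a point alone in its bin gives zero density
        return -np.inf
    return float(np.sum(np.log(dens)))

# Pick the bin count that maximizes the cross-validated likelihood.
candidates = range(2, 60)
best = max(candidates, key=lambda b: loo_log_likelihood(data, b))
```

The trade-off described above appears directly in the score: too few bins wash out real structure and lower the likelihood, while too many bins leave near-empty bins whose fluctuating densities lower it as well.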