The answer depends on the content of your images. There is no free lunch in lossless compression: you cannot create a lossless compression algorithm that performs well on all input images. If you tune your algorithm so that it performs well on a certain kind of image, there will always be other images on which it performs badly, i.e. produces a file larger than the uncompressed representation. So you should have an idea of the image content you are going to process.
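To see why no compressor can shrink every input, a simple counting (pigeonhole) argument suffices, sketched here:

```latex
% There are 2^n distinct bit strings of length n, but only 2^n - 1
% strings that are strictly shorter:
\sum_{k=0}^{n-1} 2^k \;=\; 2^n - 1 \;<\; 2^n
% A lossless compressor must be injective (otherwise it could not be
% decoded), so it cannot map every length-n input to a shorter output;
% at least one input must come out at length n or longer.
```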
The next question is whether you can afford lossy compression or whether you require lossless compression.
For typical digital photos, JPEG 2000 is a good candidate, as it supports both lossy and lossless compression and is tuned for photo content.
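As a rough illustration of the lossless/lossy split within one format, here is a sketch using Pillow. It assumes Pillow was built with OpenJPEG support for the JPEG 2000 plugin; the `irreversible`, `quality_mode` and `quality_layers` save options are taken from Pillow's JPEG 2000 documentation, and the file names are placeholders:

```python
# Sketch: save the same photo as lossless and as lossy JPEG 2000 with
# Pillow (pip install pillow; requires OpenJPEG support).
from PIL import Image
import os

img = Image.open("photo.png")  # placeholder input image

# Lossless: Pillow's JPEG 2000 default uses the reversible 5/3 wavelet.
img.save("photo_lossless.jp2")

# Lossy: irreversible 9/7 wavelet, target compression ratio of about 20:1.
img.save("photo_lossy.jp2", irreversible=True,
         quality_mode="rates", quality_layers=[20])

for name in ("photo.png", "photo_lossless.jp2", "photo_lossy.jp2"):
    print(name, os.path.getsize(name), "bytes")
```

For photographic content the lossless file is typically only a few times smaller than the raw pixel data, while the lossy one can be much smaller at the cost of irreversible changes.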
For lossy compression there is also the real possibility of advances in encoder technology, e.g. Google's alternative JPEG encoder Guetzli, which exploits specifics of human visual perception to allocate more bits to features that actually make a perceptible difference.
For images with large areas of uniform color and sharp edges, such as diagrams, graphs, or stylized maps, PNG is a good match. PNG is a lossless format, supports transparency, and achieves good compression for black-and-white images.
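A quick way to convince yourself of that is to PNG-encode a synthetic diagram-style image and compare it with the raw pixel data. The sketch below uses Pillow; the image and the resulting numbers are purely illustrative:

```python
# Sketch: PNG size of a flat-color "diagram" image vs. its raw pixel data.
from PIL import Image, ImageDraw
import io

# Synthetic diagram: large uniform areas with sharp edges.
img = Image.new("RGB", (800, 600), "white")
draw = ImageDraw.Draw(img)
draw.rectangle([100, 100, 700, 500], fill="navy")
draw.line([0, 300, 800, 300], fill="red", width=4)

raw_bytes = len(img.tobytes())        # uncompressed RGB pixel data
buf = io.BytesIO()
img.save(buf, format="PNG", optimize=True)
print(f"raw: {raw_bytes} bytes, PNG: {buf.tell()} bytes")
```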
Wikipedia also has a comparison of image file formats.
In the spirit of Kolmogorov complexity, some images can be compressed much further by finding a short algorithm that generates the image, but this usually applies only in special cases like fractals or simple raytraced CG images, not to typical digital photos.
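As a miniature of that idea, the short program below (again using Pillow) generates a fractal image: the program is a few hundred bytes of text, while the pixel data it describes is orders of magnitude larger. A typical photo has no comparably short generating program, which is why the trick does not generalize:

```python
# Sketch: an image whose generating program is far smaller than its pixels.
from PIL import Image

W, H, MAX_ITER = 640, 480, 60
img = Image.new("L", (W, H))
px = img.load()
for y in range(H):
    for x in range(W):
        c = complex(3.0 * x / W - 2.2, 2.0 * y / H - 1.0)
        z, i = 0j, 0
        while abs(z) < 2 and i < MAX_ITER:
            z = z * z + c
            i += 1
        px[x, y] = 255 * i // MAX_ITER  # grayscale by escape time
img.save("mandelbrot.png")
```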
Arbitrary (non-image) data
For general data, arithmetic coding is a good choice, as it achieves nearly optimal compression (with respect to the occurrence frequencies of symbols in the data), provided the alphabet used to represent the data suits it. (E.g. a spectral representation of small chunks of a typical music recording is usually better suited for compression than a time-series representation.)
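To see what "nearly optimal with respect to symbol frequencies" means, the sketch below computes the order-0 entropy bound that an ideal arithmetic coder approaches and compares it with zlib on the same placeholder input:

```python
# Sketch: order-0 entropy bound (what an ideal arithmetic coder with a
# static per-symbol model approaches) vs. zlib on the same data.
import math
import zlib
from collections import Counter

data = b"abracadabra" * 1000  # placeholder input

counts = Counter(data)
n = len(data)
# Shannon entropy in bits per symbol, based on byte frequencies only.
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

print(f"order-0 bound: {entropy * n / 8:.0f} bytes")
print(f"zlib actual  : {len(zlib.compress(data, 9))} bytes")
```

On highly repetitive input like this, zlib lands far below the order-0 bound because its LZ77 stage also exploits repeats, which is exactly the point about representation: what you treat as a "symbol" determines how good the frequency-based optimum can be.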