A binary variable coded 0, 1 can (usually) be scaled as (value - mean) / SD, which is presumably what you mean by a z-score.
The most obvious constraint is that if you happen to get all zeros or all ones, the SD is zero and the z-score is undefined. There is a case for assigning zero in that situation, in so far as value - mean is identically zero, but many statistical procedures won't make much sense if a variable really is a constant. More generally, if the SD is small, there is more risk that the scores are unstable and/or not well determined.
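As a minimal sketch of the guard described above (the function name and the choice to raise on a constant variable are my own, not from any particular library): for a binary variable with a proportion p of ones, the population mean is p and the population SD is sqrt(p * (1 - p)), so the z-scores take at most two distinct values.

```python
from statistics import mean, pstdev

def z_score_binary(values, on_zero_sd=None):
    """Z-score a sequence of 0/1 values, guarding against zero SD.

    With proportion p of ones, mean = p and SD = sqrt(p * (1 - p)),
    so the result contains at most two distinct z-scores.
    """
    m = mean(values)
    sd = pstdev(values)  # population SD; use stdev() for the sample version
    if sd == 0:
        # All zeros or all ones: the z-score is undefined.
        # One convention is to return zeros (value - mean is identically 0),
        # but a constant variable is rarely useful downstream.
        if on_zero_sd is None:
            raise ValueError("constant variable: SD is zero")
        return [on_zero_sd] * len(values)
    return [(v - m) / sd for v in values]

# Three ones out of four: p = 0.75, so the two z-scores are
# 1/sqrt(3) (for the ones) and -sqrt(3) (for the zero).
scores = z_score_binary([1, 1, 1, 0])
```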
One obstacle to giving a better answer to your question is precisely which "machine learning algorithm" you have in mind. It sounds as if it's an algorithm that combines data for several variables, in which case it will usually make sense to supply them on similar scales.
(LATER) As the original poster adds comments one by one, their question is morphing. I still consider that (value - mean) / SD makes sense (i.e. is not nonsensical) for binary variables so long as the SD is positive. However, logistic regression was later named as the application, and for that there is no theoretical or practical gain (and indeed some loss of simplicity) in anything other than feeding in binary variables as 0, 1: rescaling a predictor linearly just rescales its coefficient (and shifts the intercept), leaving the fitted probabilities of an unpenalized model unchanged. Your software should be able to cope with 0, 1 coding; if not, abandon that software in favour of a program that can. In terms of the title question: can, yes; should, no.
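To illustrate the logistic regression point numerically (a sketch under my own assumptions: the tiny Newton-method fitter and the function names are invented here, not from any particular library), fitting the same outcome on a 0/1 predictor and on its z-scored version gives identical fitted probabilities, because the intercept and slope adjust to absorb the linear rescaling.

```python
import math

def fit_logistic(x, y, iters=50):
    """Fit an unpenalized logistic regression y ~ b0 + b1*x by Newton's method."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0            # gradient of the log-likelihood
        h00 = h01 = h11 = 0.0    # (negative) Hessian entries
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
            w = p * (1.0 - p)
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

def predict(b0, b1, x):
    return [1.0 / (1.0 + math.exp(-(b0 + b1 * xi))) for xi in x]

x = [0, 0, 0, 1, 1, 1]               # binary predictor coded 0, 1
y = [0, 0, 1, 0, 1, 1]
z = [(xi - 0.5) / 0.5 for xi in x]   # z-scored version (mean 0.5, SD 0.5)

p_raw = predict(*fit_logistic(x, y), x)
p_std = predict(*fit_logistic(z, y), z)
# p_raw and p_std agree: scaling the predictor only rescales the coefficients.
```

With a penalized fit (ridge or lasso logistic regression) this equivalence breaks down, since the penalty is not invariant to rescaling; but for plain maximum likelihood the coding is immaterial, which is why 0, 1 is the simplest choice.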