The provided test code serializes an object to a ByteArrayOutputStream, converts the resulting byte array into a String via ByteArrayOutputStream.toString(), converts that String back into a byte array via String.getBytes(), and then attempts to deserialize the object from the byte array through a ByteArrayInputStream. This procedure will fail in most cases because of the transformations performed by ByteArrayOutputStream.toString() and String.getBytes(): to convert the contained sequence of bytes into a String, ByteArrayOutputStream.toString() decodes the bytes according to the default charset in effect; likewise, to convert the String back into a sequence of bytes, String.getBytes() encodes the characters according to the default charset. Converting bytes into characters and back again under a given charset is generally not an identity-preserving operation. As the javadoc for the String(byte[], int, int) constructor (which is invoked by ByteArrayOutputStream.toString()) states, "the behavior ... when the given bytes are not valid in the default charset is unspecified".

In the test case provided, the first two bytes of the serialization stream, 0xac and 0xed (see java.io.ObjectStreamConstants.STREAM_MAGIC), are both mapped to the character '?' because they are not valid in the default charset (ISO646-US in the JDK I'm running). The two '?' characters are then mapped back to the byte sequence 0x3f 0x3f in the reconstructed stream, which does not constitute a valid stream header.

The solution, from the perspective of the test case, is to call ByteArrayOutputStream.toByteArray() instead of toString(); this yields the raw byte sequence, which can be fed directly to the ByteArrayInputStream(byte[]) constructor.
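For illustration, here is a minimal sketch of both round trips (the class and variable names are mine, not from the submitted test; the payload is just a String, but any Serializable object would behave the same way):

    import java.io.*;

    public class RoundTrip {
        public static void main(String[] args) throws Exception {
            // Serialize an object into an in-memory buffer.
            ByteArrayOutputStream bout = new ByteArrayOutputStream();
            ObjectOutputStream oout = new ObjectOutputStream(bout);
            oout.writeObject("hello");
            oout.close();

            // Broken: decoding to a String and re-encoding corrupts any
            // bytes (such as the 0xac 0xed stream magic) that are not
            // valid in the default charset.
            byte[] corrupted = bout.toString().getBytes();

            // Correct: toByteArray() returns the raw bytes unchanged.
            byte[] raw = bout.toByteArray();

            ObjectInputStream oin =
                new ObjectInputStream(new ByteArrayInputStream(raw));
            System.out.println(oin.readObject()); // prints "hello"

            // Under most default charsets (e.g. US-ASCII, UTF-8) the
            // corrupted bytes no longer begin with a valid header, so
            // constructing the ObjectInputStream throws
            // java.io.StreamCorruptedException.
            try {
                new ObjectInputStream(new ByteArrayInputStream(corrupted));
            } catch (StreamCorruptedException expected) {
                System.out.println(expected);
            }
        }
    }

Note that the corrupted variant fails as early as the ObjectInputStream constructor, which reads and validates the stream header eagerly.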