Recent advances in sensing, electronics, processing, machine learning, and communication technologies are accelerating the development of assisted and automated functions for commercial vehicles. Environmental perception sensor data are processed to generate correct and complete situational awareness. It is of utmost importance to assess the robustness of the sensor data pipeline, particularly in the case of data degradation in a noisy and variable environment. Sensor data reduction and compression techniques are key for higher levels of driving automation, as traditional automotive wired networks are not expected to support the required sensor data rates (i.e., more than 10 perception sensors, including cameras, LiDARs, and RADARs, generating tens of Gb/s of data). This work is the first to consider video compression for camera data transmission over vehicle wired networks in the presence of highly noisy data, e.g., a partially obstructed camera field of view. The effects are discussed in terms of the drop in machine learning vehicle detection accuracy, and by visualising how detection performance varies spatially across the frames using the recently introduced Spatial Recall Index metric. A parametric occlusion noise model is presented to emulate real-world occlusion patterns, while compression is based on the well-established AVC/H.264 standard. The results demonstrate that DNN performance is stable under increasing compression even when small amounts of noise are added. However, higher levels of occlusion noise have a stronger impact on DNN performance and, when combined with compression, cause a significant decrease in detection performance.
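To illustrate the idea of a parametric occlusion noise model, the sketch below overlays opaque patches on a camera frame, with coverage and patch count as parameters. This is a minimal hypothetical example: the patch shape, placement, and parameterisation here are assumptions for illustration, not the specific model introduced in the paper.

```python
import numpy as np

def add_occlusion_noise(frame, coverage=0.2, n_patches=5, rng=None):
    """Overlay opaque square patches on a frame to emulate partial occlusion.

    Illustrative sketch only: square patches and uniform random placement
    are assumptions, not the paper's actual occlusion model.

    frame     -- image array of shape (H, W) or (H, W, C)
    coverage  -- target fraction of the frame area to occlude (patches may overlap)
    n_patches -- number of occluding patches, splitting the occluded area evenly
    rng       -- seed or numpy Generator for reproducibility
    """
    rng = np.random.default_rng(rng)
    h, w = frame.shape[:2]
    noisy = frame.copy()
    # Side length of each square patch so the total area matches `coverage`.
    side = int(np.sqrt(coverage * h * w / n_patches))
    for _ in range(n_patches):
        y = int(rng.integers(0, max(1, h - side)))
        x = int(rng.integers(0, max(1, w - side)))
        noisy[y:y + side, x:x + side] = 0  # opaque (black) occluder
    return noisy
```

A noisy frame produced this way could then be fed through an H.264 encoder and a vehicle detection DNN to study the combined effect of occlusion and compression, as the abstract describes.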