Autoencoder Based Image Quality Metric for Modelling Semantic Noise in Semantic Communications
  • Prabhath Samarathunga (University of Strathclyde)
  • Thanuj Fernando (University of Strathclyde)
  • Vishnu Gowrisetty (University of Strathclyde)
  • Thisarani Atulugama (University of Strathclyde)
  • Prof. Anil Fernando (University of Strathclyde)

Corresponding Author: [email protected]

Abstract

Semantic communication has attracted significant attention as a key technology for emerging 6G communications. Although it has significant potential, especially for high-volume media communications, there is still no suitable quality metric for modelling semantic noise in semantic communications. This paper proposes an autoencoder-based image quality metric to quantify semantic noise. An autoencoder is first trained on the reference image to produce the encoder-decoder model and compute its latent vector space. Once trained, the semantically generated/received image is passed through the same autoencoder to produce the corresponding latent vector space. Finally, the Euclidean distance between the two latent vector spaces is used to compute the Mean Square Error (MSE) between them, which measures the effectiveness of the semantically generated image. Results indicate that the proposed model achieves a correlation coefficient of 88% with subjective quality assessment. Furthermore, the proposed model is tested as a metric for evaluating image quality in conventional image coding. Results indicate that the proposed model can also replace conventional image quality metrics such as PSNR, SSIM, MSSIM, UQI, VIFP, and SSC, whereas these conventional metrics fail completely at modelling semantic noise.
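The metric described in the abstract (encode reference and received images with the same model, then take the MSE between their latent vectors) can be sketched in a few lines. The sketch below is an illustration only, not the paper's implementation: it substitutes an SVD-based linear encoder (the optimal linear autoencoder) for the trained deep autoencoder, treats image rows as training samples, and uses a hypothetical `latent_dim` of 4.

```python
import numpy as np

def fit_linear_encoder(ref_img, latent_dim=4):
    """Fit a linear encoder on the reference image via SVD, as a
    stand-in for the trained deep autoencoder in the paper.
    Rows of the image are treated as the training samples."""
    samples = ref_img.reshape(-1, ref_img.shape[-1]).astype(float)
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    # Encoder maps a sample x to z = (x - mean) @ basis.T
    return mean, vt[:latent_dim]

def latent_mse(ref_img, test_img, latent_dim=4):
    """Proposed-style metric: MSE between the latent vectors of the
    reference image and the semantically generated/received image,
    both produced by the encoder fitted on the reference."""
    mean, basis = fit_linear_encoder(ref_img, latent_dim)
    z_ref = (ref_img.reshape(-1, ref_img.shape[-1]) - mean) @ basis.T
    z_test = (test_img.reshape(-1, test_img.shape[-1]) - mean) @ basis.T
    return float(np.mean((z_ref - z_test) ** 2))

# Example: an identical image scores 0; a perturbed image scores higher.
rng = np.random.default_rng(1)
ref = rng.random((16, 16))
noisy = ref + 0.1 * rng.random((16, 16))
print(latent_mse(ref, ref))    # 0.0 for an identical image
print(latent_mse(ref, noisy))  # > 0 once semantic noise is present
```

The key design point carried over from the abstract is that both images are encoded by the *same* model, trained only on the reference, so the metric measures departure from the reference's latent representation rather than pixel-level differences.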
10 Nov 2023: Submitted to Electronics Letters
13 Nov 2023: Submission Checks Completed
13 Nov 2023: Assigned to Editor
22 Nov 2023: Reviewer(s) Assigned
24 Jan 2024: Review(s) Completed, Editorial Evaluation Pending
25 Jan 2024: Editorial Decision: Accept