University of Scholars started its journey in 2015 with five departments. The university's Board of Trustees aims to establish a research-intensive, modern private university for Bangladeshi as well as international students, one that will produce world-class researchers and industry-leading professionals. The university works under the slogan "We Build Professionals" and follows an American curriculum with a strong emphasis on the use of technology in teaching and learning. Accordingly, we are adopting the newest educational technologies (EdTech) to modernize our Learning Management System (LMS) for the best possible learning and teaching experience; this is the first initiative of its kind in Bangladesh. Our congenial atmosphere and flexible policies inspire our highly qualified faculty members, who hold academic degrees and professional training from home and abroad.
Effects of Label Noise on Performance of Remote Sensing and Deep Learning-Based Water Body Segmentation Models. Abstract: Large-scale management of surface water resources in urban areas can be difficult, especially if the region is subject to monsoonal waterlogging. Deep learning-based methods for computer vision tasks, such as image segmentation, can effectively be applied to remote sensing data to generate water body maps of large cities, aiding managerial entities, urban planners, and policymakers. However, the robustness of these models to erroneous pixel-level training class labels has not been studied in the water body segmentation context. Label noise is commonly encountered in classification tasks and may hinder performance. We collected and densely labeled Sentinel-2 images over Dhaka, one of the most densely populated and flood-prone megacities in the world, and synthetically injected four types of label noise: i) Gaussian noise, ii) translation, iii) rotation, and iv) mirroring. Our primary objective is to observe and quantitatively analyze the effects of label noise on remote sensing data-driven deep learning models for water body segmentation. Our results show that salt-and-pepper noise (injected artificially using Gaussian noise) of only 50% can cause a massive 48.55% drop in the Intersection-over-Union (IoU) score. The consequences of learning from training data with different magnitudes and settings of label noise are explored. Details - (Link)
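The four noise-injection settings listed in the abstract can be sketched for a binary water mask as follows. This is a minimal illustration only, not the paper's actual implementation: the function name, the magnitude conventions, and the restriction to 90-degree rotation steps are all assumptions.

```python
import numpy as np

def add_label_noise(mask, noise_type, magnitude, rng=None):
    """Inject synthetic label noise into a binary (0/1) segmentation mask.

    noise_type: 'gaussian'    -> flip ~magnitude fraction of pixels
                                 (salt-and-pepper effect via thresholded
                                 Gaussian noise)
                'translation' -> magnitude is a (dy, dx) pixel shift
                'rotation'    -> magnitude is a count of 90-degree turns
                'mirroring'   -> magnitude is the axis to flip (0 or 1)
    """
    rng = rng or np.random.default_rng()
    noisy = mask.copy()
    if noise_type == "gaussian":
        g = rng.normal(size=mask.shape)
        # Flip the labels wherever |noise| lands in the top `magnitude`
        # fraction, so roughly magnitude*100% of pixels change class.
        thresh = np.quantile(np.abs(g), 1.0 - magnitude)
        flip = np.abs(g) >= thresh
        noisy[flip] = 1 - noisy[flip]
    elif noise_type == "translation":
        dy, dx = magnitude
        noisy = np.roll(noisy, shift=(dy, dx), axis=(0, 1))
    elif noise_type == "rotation":
        noisy = np.rot90(noisy, k=int(magnitude))
    elif noise_type == "mirroring":
        noisy = np.flip(noisy, axis=int(magnitude))
    return noisy
```

Geometric noise (translation, rotation, mirroring) corrupts labels while keeping them spatially coherent, whereas the Gaussian-derived flips scatter errors pixel by pixel, which matches the salt-and-pepper behaviour the abstract singles out as most damaging.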
A deep learning classification model for Persian Hafez poetry based on the poet’s era. Abstract: More than any other literary genre, poetry presents a significant challenge for Natural Language Processing (NLP) algorithms. Short poems in the Persian language are called ghazals. Ghazal classification by poetic era using document embedding techniques and sequential learning has so far been an under-explored area of research. The current study explores deep learning and document embedding techniques, working with the Persian ghazals written by Hafez. We identified and employed useful NLP approaches to facilitate and automate the classification of Hafez’s poetry, and developed and implemented a set of rigorous, repeatable techniques that may be extended to other types of poetry. This work contributes to Persian text classification and NLP. We implemented neural network models that automatically classify Hafez’s Persian poetry chronologically with around 85% accuracy, significantly better than previously reported work on Persian-language poetry data. In the Persian language, meter classification and machine learning-based poetry classification have been done before; we instead introduce a classification method based on the poet’s era using sequential architectures. We found the highest accuracy when we used the Distributed Memory model for document embedding and a Long Short-Term Memory (LSTM) model for training on the Persian Hafez ghazals, achieving approximately 87% precision, 85% F1-score, and 85% recall. To perform this classification, we used refined labels for the Hafez ghazals and obtained better accuracy than Bidirectional Long Short-Term Memory (Bi-LSTM) and Gated Recurrent Units (GRU) models. Details - (Link)
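The sequential architecture the abstract describes can be sketched roughly as below. Note that the paper embeds documents with the Distributed Memory (doc2vec) model; in this sketch a trainable token-embedding layer stands in for that step, and the vocabulary size, layer dimensions, and a three-era label split are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class GhazalEraClassifier(nn.Module):
    """Minimal sketch: token embeddings -> LSTM -> era logits."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_eras=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_eras)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer token indices
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)       # final hidden state per sequence
        return self.fc(h_n[-1])          # (batch, num_eras) era logits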