A deep transfer learning-based approach for the detection and classification of acute lymphocytic leukemia using microscopic images

Traditionally, researchers, doctors, and hematologists have faced difficulties in making a timely diagnosis of leukemia, since, unlike other malignancies, leukemia typically does not form a tumor that can be seen with imaging techniques such as X-rays or CT scans. As a result, the diagnostic procedures available at medical centres for leukemia consume a lot of time. Motivated by the capabilities of artificial intelligence techniques in diagnosing the disease, this paper introduces advanced CNN models, including InceptionV3, DenseNet201, Xception, ResNet152V2, and two hybrid models, InceptionResNetV2 and XceptionInceptionResNetV2, for identifying and classifying normal and leukemic cancer cells. Accuracy, root mean square error, recall, precision, F1 score, and loss are used to evaluate all applied advanced learning models. The acute lymphoblastic leukemia dataset used in this study is separated into two classes: normal cells and malignant leukemia cells. During the pre-processing stage, every image undergoes enhancement and is visualized to extract the color channels in the form of an RGB histogram. The images are then augmented to produce regions of interest by generating extreme points and employing adaptive thresholding before being provided to the applied models for training. During the experimentation, it was found that InceptionResNetV2 had the highest validation accuracy of 98.59%. Similarly, DenseNet201 had the highest precision (97.57%), followed by InceptionV3 with the highest recall (95.77%) and F1 score (95.56%). Moreover, a confusion matrix has also been generated to obtain each model's recall, precision, and F1 score for the different classes of the dataset.
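For readers who want a concrete picture of the transfer-learning setup, the snippet below is a minimal, illustrative Keras workflow: an ImageNet-pretrained InceptionResNetV2 backbone with a small binary head for normal-vs-leukemia classification. The directory layout, image size, and hyperparameters are assumptions, not the paper's exact configuration.

```python
# Minimal transfer-learning sketch (paths and hyperparameters are illustrative assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

# Binary dataset: "normal" vs "leukemia" cell images (directory layout is hypothetical).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ALL_dataset/train", image_size=(299, 299), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ALL_dataset/val", image_size=(299, 299), batch_size=32)

# Frozen ImageNet backbone with a small classification head.
base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # normal vs. leukemia
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```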

Telugu language hate speech detection using deep learning transformer models: Corpus generation and evaluation

In today’s digital era, social media has become a new tool for communication and sharing information, and with the availability of high-speed internet it reaches the masses much faster. A lack of regulation and ethics has aided the proliferation of abusive language, and hate speech has become a growing concern on social media platforms in the form of posts, replies, and comments directed at individuals, groups, religions, and communities. However, manually classifying hate speech on online platforms is cumbersome and impractical due to the excessive amount of data being generated. Therefore, it is crucial to automatically filter online content to identify and eliminate hate speech from social media. Widely spoken, resource-rich languages like English have driven the research and achieved the desired results due to the accessibility of large corpora, annotated datasets, and tools. Resource-constrained languages are unable to reap the benefits of these advances due to the lack of data corpora and annotated datasets. India has a diverse set of languages that change with demographics, many of which have limited data availability and semantic differences. Telugu is one of the low-resource Dravidian languages spoken in the southern part of India…
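The methodological details are truncated above, but the title points to transformer-based classification. The following is a generic, hedged sketch of inference with a multilingual transformer; xlm-roberta-base is an assumed stand-in (not necessarily the model used in the paper), and the label mapping and example sentence are placeholders.

```python
# Generic transformer-based hate-speech classification sketch
# (model choice, label mapping, and example text are assumptions, not the paper's setup).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"  # a multilingual backbone whose pretraining covers Telugu
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
# Note: the classification head is randomly initialized here; in practice it is
# fine-tuned on the annotated Telugu corpus before being used for prediction.

texts = ["ఇది ఒక ఉదాహరణ వాక్యం"]  # placeholder Telugu sentence
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1)  # assumed mapping: 0 = not hate speech, 1 = hate speech
print(pred.tolist())
```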

A Review of Deep Learning‑Based Approaches for Detection and Diagnosis of Diverse Classes of Drugs

Artificial intelligence-based drug discovery has gained attention lately since it drastically cuts the time and money needed to produce new treatments. In recent years, a vast quantity of data in various formats has been made accessible in the medical field to analyse different health complications. Drug discovery aims to uncover possible novel medications using a multidisciplinary approach that includes biology, chemistry, and pharmacology. Traditional sentiment analysis methods count the occurrences of words in a text that have been assigned sentiment ratings by an expert. Several outdated and ineffective methodologies are still utilized to forecast drug design and discovery. However, with the development of DL (deep learning), the traditional drug discovery method has been further simplified. In this work, we applied deep learning models, such as LSTM (long short-term memory), GRU (gated recurrent units), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), SimpleRNN, embedding+LSTM, embedding+GRU, embedding+GRU+dropout, embedding+Conv1D+LSTM, and embedding+Conv1D+GRU, on a dataset of drug reviews. Furthermore, we trained each model with two optimizers, Adam and RMSprop, for improved optimization. This research focuses on categorizing medication reviews into positive and negative categories. The effectiveness of the different deep learning models was assessed using a wide range of performance measures. Experiments demonstrated that the GRU (gated recurrent unit) generated exceptional results on the validation dataset. In addition, this study emphasizes the relevance of deep learning methods over traditional learning approaches in categorization.
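As a concrete illustration of one of the listed architectures, here is a minimal embedding+GRU binary classifier in Keras compiled with RMSprop. The vocabulary size, sequence length, and the random integer-encoded batch are placeholders standing in for the real drug-review data.

```python
# Embedding+GRU sentiment classifier sketch (vocabulary, lengths, and data are placeholders).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20000, 200  # assumed vocabulary size and padded review length

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),   # learned word embeddings
    layers.GRU(64),                      # recurrent encoder over the review
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # positive vs. negative review
])

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Illustrative run on random integer-encoded reviews (real data loading omitted).
x = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```

Swapping `layers.GRU` for `layers.LSTM`, wrapping it in `layers.Bidirectional`, or inserting a `layers.Conv1D` before the recurrent layer yields the other listed variants.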

Optimization and Prediction of Karanja oil transesterification with domestic microwave by RSM and ANN

The optimization and transesterification of soybean oil with methanol in the presence of sodium hydroxide as a catalyst was investigated. A low-temperature transesterification process was selected to make the process more energy efficient. To further improve the production of biodiesel, the experimental design was carried out with the Box-Behnken method, and the results were analysed using response surface methodology. A model was developed to correlate the biodiesel yield with the process parameters, namely the molar ratio, the catalyst concentration, and the reaction time. The influence of the reaction variables, including the molar ratio of oil (6:1–12:1), temperature (50 °C), catalyst concentration (1–2 wt%), and residence time (30–60 minutes), on the fatty acid methyl ester (FAME) transesterification reaction was studied. A biodiesel yield of 80.86% was reached with an 8:1 molar ratio using NaOH as catalyst (1.8 wt%) in 34 minutes at a temperature of 50 °C. It was observed that the catalyst concentration, the reaction time, and the molar ratio had a significant effect on the yield of soybean biodiesel.
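As a hedged illustration of the RSM step, the sketch below fits a second-order (quadratic) response surface over the three varied factors on a Box-Behnken-style design. The factor levels mirror the ranges quoted above, but the yield values are placeholders, not the experimental data.

```python
# Quadratic response-surface sketch for biodiesel yield (scikit-learn).
# The design follows a 3-factor Box-Behnken layout; yields are placeholders, not real data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Factors: molar ratio (6-12), catalyst concentration (1-2 wt%), reaction time (30-60 min).
X = np.array([
    [6, 1.0, 45], [6, 2.0, 45], [12, 1.0, 45], [12, 2.0, 45],
    [6, 1.5, 30], [6, 1.5, 60], [12, 1.5, 30], [12, 1.5, 60],
    [9, 1.0, 30], [9, 1.0, 60], [9, 2.0, 30], [9, 2.0, 60],
    [9, 1.5, 45],  # centre point
], dtype=float)
y = np.array([65, 71, 69, 76, 63, 72, 70, 78, 66, 73, 74, 79, 77],
             dtype=float)  # placeholder yields (%)

# Second-order polynomial model, as fitted in RSM.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, y)

# Predict the yield at an optimum-like factor combination (illustrative only).
print(rsm.predict(np.array([[8.0, 1.8, 34.0]])))
```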

Optimized Cost Model with Optimal Disk Usage for Cloud

M. Aggrawal, N. Kumar, and R. Kumar, “Optimized Cost Model with Optimal Disk Usage for Cloud,” Big Data Anal. Adv. Syst. Comput., pp. 481–485, 2018.

The cloud is a bag full of resources. Using cloud services at an optimal level is required, as the cloud is now the primary technology for deployment over the Internet. Making efficient use of resources is indeed a practice that makes the cloud a better place. The cloud provides all the computing resources one may need to complete tasks, but using those resources efficiently can increase the capacity to accommodate more consumers, while consumers can also save on the cost of the services they subscribe to. This paper provides a mechanism to increase or decrease the subscription according to use.
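Because the abstract only sketches the idea of scaling a subscription to match use, the snippet below is a hypothetical illustration of such a mechanism for disk capacity; the thresholds and growth/shrink steps are assumptions, not the paper's cost model.

```python
# Hypothetical quota-adjustment sketch (thresholds and step sizes are assumptions).
def recommend_quota(subscribed_gb: float, used_gb: float,
                    upper: float = 0.85, lower: float = 0.40,
                    step: float = 0.25) -> float:
    """Grow the subscription when usage nears the limit, shrink it when capacity sits idle."""
    utilisation = used_gb / subscribed_gb
    if utilisation > upper:      # running out of space: increase the subscription
        return subscribed_gb * (1 + step)
    if utilisation < lower:      # paying for idle capacity: decrease it, keeping headroom
        return max(used_gb * 1.2, subscribed_gb * (1 - step))
    return subscribed_gb         # usage is in the efficient band: keep the current plan


# Example: 500 GB subscribed, 120 GB actually used -> a smaller plan is recommended.
print(recommend_quota(500, 120))
```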

Scheduling of Tasks (CLOUDLETS) in Heterogeneous Processing Cloud Environment

Scheduling in a cloud environment is a big challenge. It has two flavors: the first is scheduling the placement of virtual machines (VMs) on the underlying physical machines, and the second is placing a cloudlet, or task, on the right virtual machine for fast execution. In the first type of scheduling, to save energy in the DataCenter, it is always a good idea to rearrange the running VMs on the underlying physical machines so that underloaded physical machines can go to sleep; reassigning VMs from an underloaded physical machine to another is therefore a challenge. In the second type, the placement of a cloudlet on a VM for execution, the choice of VM is crucial. Because the underlying DataCenter environment is heterogeneous, it is a challenge to make use of all the VMs, both the older ones and the new high-end VMs, so balancing task assignment is challenging. The proposed work takes up the placement of tasks: tasks are picked up for execution on an FCFS basis, and our algorithm assigns each task to the VM that provides the minimum execution time. It calculates the execution time on all available VMs, together with the time needed to finish previously assigned or currently executing tasks, to find the minimum completion time. We show the algorithm working under load scenarios.
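The assignment rule described above (FCFS task order, each task going to the VM with the minimum completion time given its pending work) can be sketched as follows. Task lengths and VM speeds are illustrative, and a real evaluation would use CloudSim cloudlets and VMs rather than plain numbers.

```python
# Sketch of FCFS task ordering with minimum-completion-time VM selection
# (task lengths in MI and VM speeds in MIPS are illustrative values).
def schedule(task_lengths, vm_mips):
    """Assign tasks, in FCFS order, to the VM that would finish them earliest."""
    vm_ready = [0.0] * len(vm_mips)        # time at which each VM becomes free
    placement = []
    for length in task_lengths:            # FCFS: tasks in arrival order
        # completion time on each VM = time to drain its queue + this task's run time
        completions = [vm_ready[i] + length / vm_mips[i] for i in range(len(vm_mips))]
        best = min(range(len(vm_mips)), key=lambda i: completions[i])
        vm_ready[best] = completions[best]
        placement.append(best)
    return placement, vm_ready


# Heterogeneous VMs (old, mid-range, high-end) and a burst of tasks.
tasks = [4000, 1000, 2500, 6000, 1500]
vms = [500, 1000, 2000]
plan, finish_times = schedule(tasks, vms)
print(plan, finish_times)
```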