Antecedents To The Use Of Persuasion Knowledge (updated February 2024)


Several studies have examined the moments during a particular persuasion episode when consumers are more or less likely to use their persuasion knowledge. The degree to which persuasion knowledge is applied is influenced by cognitive capacity, the accessibility of motives, and persuasion expertise. Consumers are generally more likely to use persuasion knowledge when they are highly motivated, can explain what marketers are trying to do, and have the opportunity to do so, which fits well with other theories of consumer behavior. Research opportunities remain to identify and investigate additional factors that promote or inhibit the use of persuasion knowledge; there may be further unresearched antecedents.

Persuasion Knowledge

Understanding when individuals are more or less inclined to apply their persuasion knowledge is necessary for a fuller development of the Persuasion Knowledge Model (PKM). Although the PKM discusses some potential issues with persuasion knowledge, it does not specify the circumstances that elicit or suppress it. Three antecedents of the use of persuasion knowledge have been identified: cognitive resources, the accessibility of motives, and persuasion expertise.

Cognitive Resources

Tactic-related cognitions, for instance, have more influence than claim-related cognitions when processing resources are not constrained, because they require more effort. Researchers have further suggested that activating and using persuasion knowledge requires cognitive resources. In a series of studies of interpersonal persuasion between a salesperson and a customer, they characterized the activation of persuasion knowledge as a search for ulterior motives. They argued that because inferences about motives require higher-order, attributional thinking, cognitive resources are necessary to employ persuasion knowledge. Using several manipulations of cognitive resources, the studies showed that consumers are less likely to use persuasion knowledge in a marketplace transaction when processing resources are constrained than when they are unconstrained.

Accessibility of Motives

In addition to the crucial role of cognitive capacity, the accessibility of ulterior motives is a significant predictor of the use of persuasion knowledge. Research shows that when ulterior motives are easily accessible, consumers are more likely to use persuasion knowledge during a persuasion episode; when such motives are harder to access, they are less likely to do so. Information about the firm's business status, priming of motives or tactics, blatant use of persuasion tactics, agent expertise, and consumer goals can all increase the accessibility of ulterior motives. The accessibility of motives has also been shown to interact with cognitive capacity in shaping how persuasion knowledge is used: when accessibility is high, consumers may use persuasion knowledge even with limited cognitive resources. For instance, targets with high motive accessibility may more readily spot overt persuasion tactics such as ingratiation.

Persuasion Expertise

The person's persuasion expertise is the third established antecedent of the use of persuasion knowledge. Although persuasion knowledge has frequently been treated as situational (i.e., engaged when cognitive capacity or the accessibility of motives is high), it may also be a persistent individual-difference variable. Different levels of experience may produce different degrees of persuasion expertise, since experience is essential for acquiring persuasion knowledge. Consistent with this, studies show that older adults, who typically have more experience with persuasion, apply their persuasion knowledge more sophisticatedly than younger people.

Experience likely drives differences in the amount and content of persuasion knowledge. The idea that people differ in how they use persuasion knowledge inspired the creation of an individual-difference scale that assesses persuasion expertise as a component of consumer self-confidence. This scale has been used to divide people into high and low persuasion-knowledge groups, with noticeable behavioral differences between them. Research thus suggests that persuasion knowledge may be activated episodically or chronically. Further research is needed to understand how individual and situational factors jointly shape the use of persuasion knowledge.

Accumulating persuasion knowledge does not always mean consumers will draw persuasion-related inferences in a given context. Consumers' varying capacities to use their persuasion knowledge may stem from several factors, such as −

Individual Characteristics

Individual characteristics influence the ability to perceive the persuasive nature of marketing stimuli. For example, Kirmani and Zhu studied the role of regulatory focus, which characterizes an individual's strategy for achieving goals. They found that consumers who concentrate on obtaining positive outcomes are more likely to recognize the persuasive nature of a stimulus than consumers who focus on avoiding negative outcomes.

Characteristics of the Marketing Stimulus and the Situation

Campbell and Kirmani have demonstrated that whether a person is the direct target of a persuasion episode or merely an observer affects the activation of persuasion knowledge. The two scenarios differ in their cognitive demands: the target typically spends more cognitive resources than an observer on handling the issues that arise during the episode. As a result, the target has less cognitive capacity for inferences about persuasion than the observer, who is therefore more likely to notice persuasion attempts than a firsthand participant in the moment of contact.


Because consumers are goal-directed, a consumer's goals will likely affect how much persuasion knowledge is activated. Customers may hold persuasion-related goals, such as avoiding being persuaded or obtaining the best deal, which could increase the likelihood of using persuasion knowledge. The impact of goals on persuasion knowledge has yet to be studied extensively. It would also be helpful to identify variables that precede the application of persuasion knowledge in conjunction with either topic or agent knowledge.


Knowledge Distillation: Theory And End To End Case Study

This article was published as a part of the Data Science Blogathon. It covers the theory of knowledge distillation and an end-to-end case study on a business problem: classifying chest x-ray images for pneumonia detection.

Image Source: Alpha Coders

What is Knowledge Distillation?

Knowledge Distillation aims to transfer knowledge from a large deep learning model to a small deep learning model. Here size refers to the number of parameters in the model, which directly relates to the model's latency.

Knowledge distillation is therefore a method to compress the model while maintaining accuracy. Here the bigger network which gives the knowledge is called a Teacher Network and the smaller network which is receiving the knowledge is called a Student Network.

 (Image Source: Author, Inspired from Reference [6])

Why make the Model Lighter?

In many applications, the model needs to be deployed on systems with low computational power, such as mobile and edge devices. For example, in the medical field, limited-computation systems (e.g., POCUS – Point of Care Ultrasound) are used in remote areas where models must run in real time. From both a time (latency) and a memory (computational power) standpoint, it is desirable to have ultra-lite yet accurate deep learning models.

But ultra-lite models (a few thousand parameters) may not give us good accuracy on their own. This is where Knowledge Distillation helps: by taking guidance from the teacher network, it keeps the model lite while maintaining accuracy.

Knowledge Distillation Steps

Below are the steps for Knowledge distillation:

1) Train the teacher network: The large teacher network is first fully trained on the dataset in the usual way.

2) Define the student network: A much smaller network is defined, to be trained with the teacher's help.

3) Train the student network intelligently in coordination with the teacher network: The student network is trained in coordination with the fully trained teacher network. Forward propagation is done on both the teacher and student networks, while backpropagation is done only on the student network. Two loss functions are defined: the student loss and the distillation loss. These loss functions are explained in the next section of this article.


Knowledge Distillation Mathematical Equations:

(Image Source: Author, Inspired from Reference [7])

Loss functions for the teacher and student networks are defined as below:

Teacher Loss LT (between the actual labels and the teacher network's predictions):

LT = H(p, qT)

Total Student Loss LTS:

LTS = α * Student Loss + Distillation Loss

LTS = α * H(p, qS) + H(q̃T, q̃S)

where:

Student Loss = H(p, qS)

Distillation Loss = H(q̃T, q̃S)

H: loss function (Categorical Cross-Entropy or KL Divergence)

p: true (one-hot) labels

zT and zS: pre-softmax logits of the teacher and student networks

qT = softmax(zT), qS = softmax(zS)

q̃T = softmax(zT/t), q̃S = softmax(zS/t)

alpha (α) and temperature (t) are hyperparameters.

Temperature t is used to reduce the magnitude difference among the class likelihood values, i.e., to soften the predicted distributions.

These mathematical equations are taken from reference [3].
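The effect of temperature can be seen with a small NumPy sketch (the logits here are made-up values, not taken from the case study):

```python
import numpy as np

def softmax(z, t=1.0):
    # Scale logits by temperature t; subtract the max for numerical stability
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([8.0, 2.0, 1.0])  # hypothetical pre-softmax logits for 3 classes

p1 = softmax(logits, t=1)  # sharp: almost all mass on class 0
p6 = softmax(logits, t=6)  # softened: the gaps between classes shrink

print(np.round(p1, 3))  # → [0.997 0.002 0.001]
print(np.round(p6, 3))  # → [0.595 0.219 0.185]
```

A higher temperature preserves the relative information in the teacher's smaller logits, which is exactly what the distillation loss transfers to the student.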

End to End Case Study

Here we will look at a case study where we will implement the knowledge distillation concept in an image classification problem for pneumonia detection.

About Data:

The dataset contains chest x-ray images. Each image can belong to one of three classes:

NORMAL

BACTERIA (bacterial pneumonia)

VIRUS (viral pneumonia)
Let’s get started!!

Importing Required Libraries:

import os
import glob
import shutil
import random
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import h5py
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, Dropout, MaxPool2D, BatchNormalization, Input, Conv2DTranspose, Concatenate
from tensorflow.keras.losses import SparseCategoricalCrossentropy, CategoricalCrossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
from IPython.display import display
from PIL import Image as im

Downloading the data

The dataset is huge. I randomly selected 1000 images for each class and kept 800 images in the train data, 100 in the validation data, and 100 in the test data for each class. I zipped this selected data and uploaded it to my Google Drive.

S. No. Class Train Test Validation

1. NORMAL 800 100 100

2. BACTERIA 800 100 100

3. VIRUS 800 100 100
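The split described above can be produced with a small helper script. This is a sketch, not the author's original code; the destination folder names and the copy-based approach are assumptions:

```python
import os
import random
import shutil

def make_split(src_dir, dst_root, n_train=800, n_val=100, n_test=100, seed=1):
    """Randomly assign images from one class folder to train/val/test folders."""
    files = sorted(os.listdir(src_dir))
    random.Random(seed).shuffle(files)
    splits = {
        "train": files[:n_train],
        "val": files[n_train:n_train + n_val],
        "test": files[n_train + n_val:n_train + n_val + n_test],
    }
    for split, names in splits.items():
        out_dir = os.path.join(dst_root, split, os.path.basename(src_dir))
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src_dir, name), out_dir)
```

Calling make_split once per class folder (NORMAL, BACTERIA, VIRUS) yields the 800/100/100 layout consumed by the data generators below.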

Downloading the data from google drive to google colab:

# downloading the data and unzipping it
from google.colab import drive
drive.mount('/content/drive')
!unzip "/content/drive/MyDrive/" -d "/content/"

Visualizing the images

We will now look at some images from each of the classes.

plt.figure(figsize=(12, 9))
for i, folder in enumerate(os.listdir(train_path)):
    for j, img_name in enumerate(os.listdir(train_path + "/" + folder)):
        filename = train_path + "/" + folder + "/" + img_name
        img =
        ax = plt.subplot(3, 4, 4 * i + j + 1)
        plt.imshow(img, 'gray')
        ax.set_xlabel(folder + ' ' + str(img.size[0]) + 'x' + str(img.size[1]))
        ax.axes.xaxis.set_ticklabels([])
        ax.axes.yaxis.set_ticklabels([])
        img.close()
        if j == 3:  # show 4 images per class
            break

So above sample images suggest that each x-ray image can be of a different size.

Creating Data Generators

We will use the Keras ImageDataGenerator for image augmentation. Image augmentation produces multiple transformed copies of an image; the transformations can be cropping, rotating, or flipping. This helps the model generalize. It will also ensure that every image has the same size (224×224). Below is the code for the train and validation data generators.

def trainGenerator(batch_size, train_path):
    datagen = ImageDataGenerator(rescale=1. / 255,
                                 rotation_range=5,
                                 shear_range=0.02,
                                 zoom_range=0.1,
                                 brightness_range=[0.7, 1.3],
                                 horizontal_flip=True,
                                 vertical_flip=True,
                                 fill_mode='nearest')
    train_gen = datagen.flow_from_directory(train_path,
                                            batch_size=batch_size,
                                            target_size=(224, 224),
                                            shuffle=True,
                                            seed=1,
                                            class_mode="categorical")
    for image, label in train_gen:
        yield (image, label)

def validGenerator(batch_size, valid_path):
    datagen = ImageDataGenerator(rescale=1. / 255)
    valid_gen = datagen.flow_from_directory(valid_path,
                                            batch_size=batch_size,
                                            target_size=(224, 224),
                                            shuffle=True,
                                            seed=1)
    for image, label in valid_gen:
        yield (image, label)

Model 1: Teacher Network

Here we will use the VGG16 model and train it with transfer learning (using weights pre-trained on the ImageNet dataset).

We will first define the VGG16 model.

from tensorflow.keras.applications.vgg16 import VGG16

base_model = VGG16(input_shape=(224, 224, 3),  # shape of our images
                   include_top=False,          # leave out the ImageNet classifier head
                   weights='imagenet')

Out of the total layers, we will make the first 8 layers untrainable:


for layer in base_model.layers[:8]:

layer.trainable = False

x = layers.Flatten()(base_model.output)
# Add a fully connected layer with 512 hidden units and ReLU activation
x = layers.Dense(512, activation='relu')(x)
# Add a dropout rate of 0.5
x = layers.Dropout(0.5)(x)
x = layers.Dense(3)(x)  # linear activation to get pre-softmax logits

model = tf.keras.models.Model(base_model.input, x)
opti = Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001)
model.compile(optimizer=opti, loss=CategoricalCrossentropy(from_logits=True), metrics=['acc'])
model.summary()

As we can see, there are 27M parameters in the teacher network.

One important point to note here is that the last layer of the model does not have an activation function (i.e., it uses the default linear activation). Normally a multi-class classifier would end in a softmax layer, but here we keep the linear activation to obtain pre-softmax logits, because these logits will be used along with the student network's pre-softmax logits in the distillation loss function.

Hence, we use from_logits = True in the CategoricalCrossentropy loss function, which tells the loss to compute directly from the logits. If we had used a softmax activation, it would be from_logits = False.
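The following NumPy sketch (mirroring what the Keras loss does internally, not the Keras implementation itself) shows why the flag matters: with from_logits=True the loss applies softmax to the raw logits first, whereas feeding raw logits into a probability-based cross-entropy produces a meaningless value. The logits here are made-up numbers:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, q):
    # H(p, q) = -sum_i p_i * log(q_i); q must be a probability distribution
    return -np.sum(p * np.log(q))

logits = np.array([2.0, 1.0, -1.0])  # hypothetical pre-softmax outputs of the last Dense layer
p_true = np.array([1.0, 0.0, 0.0])   # one-hot ground truth

# from_logits=True: softmax is applied internally before the cross-entropy
correct = cross_entropy(p_true, softmax(logits))

# Mistake: treating raw logits as probabilities (what would effectively happen
# if raw logits were passed with from_logits=False) can even yield a negative "loss"
wrong = -np.sum(p_true * np.log(np.clip(logits, 1e-7, None)))

print(correct, wrong)
```

The first value is a proper positive cross-entropy; the second is nonsense because logits are not constrained to [0, 1].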

We will now define a callback for the early stopping of the model and run the model.

Running the model:

earlystop = EarlyStopping(monitor='val_acc', patience=5, verbose=1)
filepath = "model_save/weights-{epoch:02d}-{val_acc:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks = [earlystop]

vgg_hist =,
                     validation_data=validation_generator,
                     validation_steps=10,
                     steps_per_epoch=90,
                     epochs=50,
                     callbacks=callbacks)

Checking the accuracy and loss for each epoch:

import matplotlib.pyplot as plt
plt.figure(1)

# summarize history for accuracy
plt.subplot(211)
plt.plot(vgg_hist.history['acc'])
plt.plot(vgg_hist.history['val_acc'])
plt.title('teacher model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')

# summarize history for loss
plt.subplot(212)
plt.plot(vgg_hist.history['loss'])
plt.plot(vgg_hist.history['val_loss'])
plt.title('teacher model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')

Now we will evaluate the model on the test data:

# First, we are going to load the file names and their respective target labels into a numpy array!

from sklearn.datasets import load_files
import numpy as np

test_dir = '/content/test'
no_of_classes = 3

def load_dataset(path):
    data = load_files(path)
    files = np.array(data['filenames'])
    targets = np.array(data['target'])
    target_labels = np.array(data['target_names'])
    return files, targets, target_labels

x_test, y_test, target_labels = load_dataset(test_dir)

from keras.utils import np_utils
y_test = np_utils.to_categorical(y_test, no_of_classes)

# We just have the file names in the x set. Let's load the images and convert them into an array.
from keras.preprocessing.image import array_to_img, img_to_array, load_img

def convert_image_to_array(files):
    images_as_array = []
    for file in files:
        # convert to a NumPy array, resized to the model's input size
        images_as_array.append(tf.image.resize(img_to_array(load_img(file)), (224, 224)))
    return images_as_array

x_test = np.array(convert_image_to_array(x_test))
print('Test set shape : ', x_test.shape)
x_test = x_test.astype('float32') / 255

# Let's visualize test predictions.
y_pred_logits = model.predict(x_test)
y_pred = tf.nn.softmax(y_pred_logits)

# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(16, 9))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=16, replace=False)):
    ax = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_test[idx]))
    pred_idx = np.argmax(y_pred[idx])
    true_idx = np.argmax(y_test[idx])
    ax.set_title("{} ({})".format(target_labels[pred_idx], target_labels[true_idx]),
                 color=("green" if pred_idx == true_idx else "red"))

Calculating the accuracy of the test dataset:

print(model.metrics_names)
loss, acc = model.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)


Model 2: Student Network (with Knowledge Distillation)

The student network defined here is a series of 2D convolution and max-pooling layers, just like our teacher network VGG16. The only difference is that the number of convolution filters in each layer of the student network is much smaller than in the teacher network. This achieves our goal of having far fewer weights (parameters) to learn in the student network during training.

Defining the student network:

# import necessary layers
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.layers import MaxPool2D, Flatten, Dense, Dropout
from tensorflow.keras import Model

# input
input = Input(shape=(224, 224, 3))

# 1st Conv block
x = Conv2D(filters=8, kernel_size=3, padding='valid', activation='relu')(input)
x = Conv2D(filters=8, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 2nd Conv block
x = Conv2D(filters=16, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=16, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 3rd Conv block
x = Conv2D(filters=32, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=32, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 4th Conv block
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 5th Conv block
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# Fully connected layers
x = Flatten()(x)
x = Dense(units=256, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(units=3)(x)  # last layer with linear activation

# creating the model
s_model_1 = Model(inputs=input, outputs=output)
s_model_1.summary()

Note that the number of parameters here is only 296k as compared to what we got in the teacher network (27M).
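A quick back-of-the-envelope check shows where the savings come from. For a Conv2D layer, the parameter count is kernel_height × kernel_width × input_channels × filters, plus one bias per filter:

```python
def conv2d_params(kernel, c_in, filters):
    # weights per filter: kernel * kernel * c_in; plus one bias per filter
    return kernel * kernel * c_in * filters + filters

# First conv layer of VGG16 (teacher): 64 3x3 filters on a 3-channel input
teacher_conv1 = conv2d_params(3, 3, 64)
# First conv layer of the student above: only 8 3x3 filters
student_conv1 = conv2d_params(3, 3, 8)

print(teacher_conv1, student_conv1)  # → 1792 224
```

Repeating this reduction in every layer is what shrinks 27M parameters down to 296k.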

Now we will define the distiller. Distiller is a custom class that we will define in Keras in order to establish coordination/communication with the teacher network.

This Distiller class takes the student and teacher networks, the hyperparameters (alpha and temperature, as described in the first part of this article), and the training data (x, y) as input. It runs forward propagation on both the teacher and the student and computes both losses: the student loss and the distillation loss. Backpropagation is then performed only on the student network, and its weights are updated.

Defining the Distiller:

class Distiller(keras.Model):
    def __init__(self, student, teacher):
        super(Distiller, self).__init__()
        self.teacher = teacher
        self.student = student

    def compile(
        self,
        optimizer,
        metrics,
        student_loss_fn,
        distillation_loss_fn,
        alpha=0.5,
        temperature=2,
    ):
        """Configure the distiller.

        Args:
            optimizer: Keras optimizer for the student weights
            metrics: Keras metrics for evaluation
            student_loss_fn: Loss function of difference between student predictions and ground truth
            distillation_loss_fn: Loss function of difference between soft student predictions and soft teacher predictions
            alpha: weight applied to student_loss_fn (the distillation loss has weight 1 here)
            temperature: Temperature for softening probability distributions. Larger temperature gives softer distributions.
        """
        super(Distiller, self).compile(optimizer=optimizer, metrics=metrics)
        self.student_loss_fn = student_loss_fn
        self.distillation_loss_fn = distillation_loss_fn
        self.alpha = alpha
        self.temperature = temperature

    def train_step(self, data):
        # Unpack data
        x, y = data

        # Forward pass of teacher (its weights stay frozen)
        teacher_predictions = self.teacher(x, training=False)

        with tf.GradientTape() as tape:
            # Forward pass of student
            student_predictions = self.student(x, training=True)

            # Compute losses
            student_loss = self.student_loss_fn(y, student_predictions)
            distillation_loss = self.distillation_loss_fn(
                tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
                tf.nn.softmax(student_predictions / self.temperature, axis=1),
            )
            loss = self.alpha * student_loss + distillation_loss

        # Compute gradients and update the student weights only
        trainable_vars = self.student.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update the metrics configured in `compile()`
        self.compiled_metrics.update_state(y, student_predictions)

        # Return a dict of performance
        results = { m.result() for m in self.metrics}
        results.update(
            {"student_loss": student_loss, "distillation_loss": distillation_loss}
        )
        return results

    def test_step(self, data):
        # Unpack the data
        x, y = data

        # Compute predictions
        y_prediction = self.student(x, training=False)

        # Calculate the loss
        student_loss = self.student_loss_fn(y, y_prediction)

        # Update the metrics
        self.compiled_metrics.update_state(y, y_prediction)

        # Return a dict of performance
        results = { m.result() for m in self.metrics}
        results.update({"student_loss": student_loss})
        return results

Now we will initialize and compile the distiller. Here for the student loss, we are using the Categorical cross-entropy function and for distillation loss, we are using the KLDivergence loss function.

The KLDivergence loss function measures the distance between two probability distributions. By minimizing the KL divergence, we push the student network's predictions to be similar to the teacher network's.
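As a rough illustration in plain NumPy (with made-up distributions, not outputs from the networks above), the KL divergence shrinks as the student's distribution approaches the teacher's and is zero when they match:

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

teacher = np.array([0.6, 0.3, 0.1])         # softened teacher prediction
student_far = np.array([0.2, 0.2, 0.6])     # student far from the teacher
student_near = np.array([0.55, 0.3, 0.15])  # student close to the teacher

print(kl_divergence(teacher, student_far))   # larger
print(kl_divergence(teacher, student_near))  # smaller
print(kl_divergence(teacher, teacher))       # 0.0
```

Minimizing this quantity during training is what pulls the student's softened outputs toward the teacher's.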

Compiling and Running the Student Network Distiller:

# Initialize and compile distiller
distiller = Distiller(student=s_model_1, teacher=model)
distiller.compile(optimizer=Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001),
                  metrics=['acc'],
                  student_loss_fn=CategoricalCrossentropy(from_logits=True),
                  distillation_loss_fn=tf.keras.losses.KLDivergence(),
                  alpha=0.5,
                  temperature=2)

# Distill teacher to student
distiller_hist =,
                               validation_data=validation_generator,
                               epochs=50,
                               validation_steps=10,
                               steps_per_epoch=90)

Checking the plot of accuracy and loss for each epoch:

import matplotlib.pyplot as plt
plt.figure(1)

# summarize history for accuracy
plt.subplot(211)
plt.plot(distiller_hist.history['acc'])
plt.plot(distiller_hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')

# summarize history for loss
plt.subplot(212)
plt.plot(distiller_hist.history['student_loss'])
plt.plot(distiller_hist.history['val_student_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')
plt.tight_layout()

Checking accuracy on the test data:

print(distiller.metrics_names)
acc, loss = distiller.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)

We got 74% accuracy on the test data, versus 77% with the teacher network. Now we will change the hyperparameter t to see if we can improve the student network's accuracy.

Compiling and Running the Distiller with t = 6:

# Initialize and compile distiller
distiller = Distiller(student=s_model_1, teacher=model)
distiller.compile(optimizer=Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001),
                  metrics=['acc'],
                  student_loss_fn=CategoricalCrossentropy(from_logits=True),
                  distillation_loss_fn=tf.keras.losses.KLDivergence(),
                  alpha=0.5,
                  temperature=6)

# Distill teacher to student
distiller_hist =,
                               validation_data=validation_generator,
                               epochs=50,
                               validation_steps=10,
                               steps_per_epoch=90)

Plotting the loss and accuracy for each epoch:

import matplotlib.pyplot as plt
plt.figure(1)

# summarize history for accuracy
plt.subplot(211)
plt.plot(distiller_hist.history['acc'])
plt.plot(distiller_hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')

# summarize history for loss
plt.subplot(212)
plt.plot(distiller_hist.history['student_loss'])
plt.plot(distiller_hist.history['val_student_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')
plt.tight_layout()


Checking the test accuracy:

print(distiller.metrics_names)
acc, loss = distiller.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)

With t = 6, we got 75% accuracy, which is better than what we got with t = 2.

This way, we can run more iterations, changing the values of the hyperparameters alpha (α) and temperature (t) to get better accuracy.

Model 3: Student Model without Knowledge Distillation

Now we will check the student model without Knowledge Distillation. Here there will be no coordination with the teacher network and there will be only one loss function i.e. Student Loss.

The student model architecture remains the same as the previous one (here named s_model_2); it is simply trained without distillation.

Compiling and running the model:

opti = Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001)
s_model_2.compile(optimizer=opti, loss=CategoricalCrossentropy(from_logits=True), metrics=['acc'])

earlystop = EarlyStopping(monitor='val_acc', patience=5, verbose=1)
filepath = "model_save/weights-{epoch:02d}-{val_acc:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks = [earlystop]

s_model_2_hist =,
                                 validation_data=validation_generator,
                                 validation_steps=10,
                                 steps_per_epoch=90,
                                 epochs=50,
                                 callbacks=callbacks)

Our model stopped after 13 epochs because we used the early-stopping callback, which halts training when validation accuracy does not improve for 5 epochs.

Plotting the loss and accuracy for each epoch:

import matplotlib.pyplot as plt
plt.figure(1)

# summarize history for accuracy
plt.subplot(211)
plt.plot(s_model_2_hist.history['acc'])
plt.plot(s_model_2_hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')

# summarize history for loss
plt.subplot(212)
plt.plot(s_model_2_hist.history['loss'])
plt.plot(s_model_2_hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')
plt.tight_layout()

Checking the Test Accuracy:


loss, acc = s_model_2.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)

Here we are able to achieve 64% accuracy on the test data.

Result Summary:

Below is the comparison of all four models that are made in this case study:

S. No. Model No. of Parameters Hyperparameters Test Accuracy

1 Teacher Model 27 M – 77%

2 Student Model with Distillation 296 k α = 0.5, t = 2 74%

3 Student Model with Distillation 296 k α = 0.5, t = 6 75%

4 Student Model without Distillation 296 k – 64%

As the table above shows, with knowledge distillation we achieved 75% accuracy with a very lite neural network. We can tune the hyperparameters α and t to improve it further.


Conclusion

In this article, we saw that Knowledge Distillation can compress a deep CNN while maintaining accuracy, so that it can be deployed on embedded systems with less storage and computational power.

We used Knowledge Distillation on the Pneumonia detection problem from x-ray images. By distilling Knowledge from a Teacher Network having 27M parameters to a Student Network having only 0.296M parameters (almost 100 times lighter), we were able to achieve almost the same accuracy. With more hyperparameter iterations and ensembling of multiple students networks as mentioned in reference [3], the performance of the student model can be further improved.


References:

1) Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning.

2) Dataset: Kermany, Daniel; Zhang, Kang; Goldbaum, Michael (2018), “Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification”, Mendeley Data, V2, doi: 10.17632/rscbjbr9sj.2

3) Designing Lightweight Deep Learning Models for Echocardiography View Classification.

The media shown in this article are not owned by Analytics Vidhya and are used at the author’s discretion.


How To Use All The Features Of Skype Without An Account Or Download.

Skype is the original and still most used VOIP program available on the internet. Since its creation back in 2003 Skype has evolved across many platforms and changed ownership just as many times, to now finally rest in the firm grasp of Microsoft. Skype has traditionally always been available as a download in the form of App or Program for Windows, Android, and iOS systems. Until now!


Today Microsoft released an alternative to the often slow and bulky software version of its popular Skype platform. Users can now use Skype directly from their browser, without even needing to sign up for an account or sign in to an existing one. That will save a lot of time, especially if you only use Skype on odd occasions.

The new system works quite simply: a person starts a chat and is given a URL to it, which they forward to the person or people they wish to connect with. The recipient of the URL invitation does not need Skype installed on their computer, nor a Skype account. These conversations still support sharing files, emoticons, and pictures, just as in the installed version. Another great little feature is that it allows up to 25 members for video and voice calls, and up to 300 users for text conversations (chat).

How to Create and Invite People to Skype Web Browser Conversations

Something worth mentioning is that if you are trying to access a Skype URL conversation from your smartphone or tablet, you will still need to download or log into the app. This feature is currently limited to Computer web browsers. Apart from this, the only other downside of this service is that the Skype chat interface is only currently available in English.

You’re reading this in English anyway, so it’s not going to affect you; however, if you are trying to send the link to relatives or friends who don’t read English, it may get a little problematic. The last thing to consider with this service is that conversations will only be active for 24 hours from the moment the link has been created.


Note: After 24 hours, the conversation will disappear completely, and you will need to generate a new link to start another conversation. This new feature is great for those of us who hate signing up for and installing programs we only use on the odd occasion.

The Official Skype Launch and Instruction Method Below.

Use Hazeover For Mac To Get Rid Of Distractions

HazeOver is a small piece of software that does one job really well. In the old days we used to call software like that a “hamster” after the way those tiny rodents single-mindedly twirl on their wheel all day.

What HazeOver does is enhance your focus on the job at hand by dimming the rest of your desktop and giving focus, in the true sense of the word, to the app you are using.

In this review we’ll look at purchasing, installing and using HazeOver on your Mac and see whether it really does do this one job really well.

Haze Filter

To obtain HazeOver, go to the Mac App Store and search for “hazeover” or follow this link. After purchasing the app for $3.99, it is installed directly in your Applications folder. Once the software is installed, you run it in the usual way. It stays parked in the menu bar until you quit.

The preferences pane pops up on first launch, where you choose the level of dimming, whether you want the app in the menu bar, and whether it starts at login. There is also a button to grant the app security permissions so it doesn’t ask every time whether it’s okay for the app to change your screen settings.

Once you are done messing with the settings, you can close the Preferences and get started. As you do so, a little alert will pop up to remind you that the app will stay running in the menu bar.

Pulling Focus

There are no controls to speak of apart from the preferences (and the menu bar menu), which let you adjust the darkness of the dim. In practice, you need far less than you would assume, so go with the defaults at first and tune it down to taste.

It’s a deceptively simple thing, but it works. Basically what happens when the software is in play is that any window you are working on looks as normal. Anything in the background is dimmed out. Sounds simple when you say it like that, doesn’t it? In fact, the difference, perceptually speaking, is huge, and you really do pay less attention to what’s going on behind the apps you are working on.

Anything that pops up is less clear and so much easier to ignore unless you are the sort of person who is biochemically obsessed with knowing everything that’s going on. If you actually want to ignore everything else other than your chosen task, HazeOver really does help.

You might get slightly irritated by the way focus changes, but they’ve done a pretty good job of minimizing any annoying scene changes. In the test period, everything worked as it should, and there were no weird transitions between windows or views. Overall it was really solid.

Fade to Black

HazeOver is a paid app, but that shouldn’t upset you. It’s easy to get hypnotized by free stuff and the prevailing going rate of apps in the store and start thinking that paying more than a few cents for an app is outrageous overpricing, but let’s be real. Software, good software you actually use, is made by real people who need to eat. And the price in this case is reasonable.

With that out of the way, the price is right because this is a piece of software you will actually use. Frankly it’s something that should be an option in the OS anyway, and it’s possible in the future it might be. Until then this is a cheap and user-friendly option.

In the interest of full disclosure, we should mention that although our copy of HazeOver was provided free by the manufacturer, this in no way affects our honest evaluation of the software, and the developers were happy for us to review this product on that basis in our own words.

Phil South

Phil South has been writing about tech subjects for over 30 years. Starting out with Your Sinclair magazine in the 80s, and then MacUser and Computer Shopper. He’s designed user interfaces for groundbreaking music software, been the technical editor on film making and visual effects books for Elsevier, and helped create the MTE YouTube Channel. He lives and works in South Wales, UK.


9 Ways To Make Better Use Of Gmail Filters

Gmail filters are a good way to set rules for your email and organize your inbox without doing the manual work. Here are a few ways you can make good use of Gmail filters.

1. Forward all incoming emails and archive them

If you have multiple Gmail accounts and you really only use one of them, a good way to manage all your email is to forward the emails from all the accounts to the primary account. In addition, you can mark the emails as read and archive them, so they won’t show up in the inbox.

To accomplish this, create a new filter with @ in the “From” field. (It will work with * as well)

In the next section, you can select “Forward it” as well as “Mark as read” (or “Archive”).

2. Auto-reply to Email Using Canned Responses

If you always receive emails from the same user, or emails that follow the same pattern (like questions on a particular topic, guest post requests, etc.), you can compose a canned response and create a filter to auto-reply with it for such emails.

Here’s the guide to set up the filter to auto-reply with a canned response.

3. Sort attachments of various sizes

The standard filter allows you to select emails with attachments. You can further improve this filter by specifying the attachment size. For example, to add a “big file attachment” label to emails with attachments larger than 10MB:

1. In the search bar at the top of Gmail, type:

Yet another way to customize this filter is to sort attachment according to their file types, something like:
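As an illustration (the exact queries are missing from this copy of the article; these are sketches using Gmail’s standard search operators), a size-based query and a file-type-based query could look like:

```
has:attachment larger:10M
has:attachment filename:pdf
```

Enter the query in the search bar, then create a filter from it and choose the label you want applied.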

4. Configure tabbed inbox emails

For the new tabbed inbox interface in Gmail, Google decides which emails go into which category/tab. By using filters, you can set your own rules to override the defaults, such as having emails from Twitter go directly into the “Primary” tab instead of the “Social” tab.

5. Quickly organize old emails into label

After you have created a filter, it will only work for future incoming emails. If you already have thousands of emails in your inbox and you only just started out using filters, there is an option for you to quickly apply the filter to all the emails in your inbox.

6. Export/Import Filter to new Gmail account

If you have multiple Gmail accounts and you want all of them to have the same set of filters, instead of creating each filter one-by-one in all the accounts, you can simply export from one account and import to other Gmail accounts.

7. Send a to-do list to yourself and auto assign it to label

If you have the habit of recording the things that you need to do, you can simply email yourself with the subject “TODO” and use a filter to add a “TODO” label to the email.
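As a sketch (the exact filter is not shown in the article; this uses Gmail’s standard search operators), the matching query could be:

```
from:me subject:TODO
```

with the filter action set to apply the “TODO” label.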

8. Use Gmail as an RSS reader

Using a service like IFTTT, you can easily convert an RSS feed into an email and have it delivered to your inbox. You can then use a filter to assign an “RSS” label to the email.

9. Create a disposable email address with the “+” alias

For this to work, you have to create the filter:
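As an example (the matching criteria are missing from this copy of the article; the “+” alias is standard Gmail addressing, and the address below is hypothetical), if you signed up for a site as yourname+newsletter@gmail.com, the filter criteria could be:

```
to:yourname+newsletter@gmail.com
```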

Action: Delete it

Conclusion

There are tons of ways to make use of Gmail filters and we have only scratched the surface. Do share with us the cool filters you have come up with and innovative ways to make use of them.

Image credit: file folder with slots for household expenses by BigStockPhoto


Damien Oh has been writing tech articles since 2007 and has over 10 years of experience in the tech industry. He is proficient in Windows, Linux, Mac, Android, and iOS, and has worked as a part-time WordPress developer. He is currently the owner and Editor-in-Chief of Make Tech Easier.


Use Of Artificial Intelligence In 2023

This means that 2023 will be an important year, setting the tone for the next decade of innovation in the AI space and continuing the current momentum. But what does this mean for organizations selling and buying AI solutions? In which areas should they invest?

According to Forrester’s various surveys:

53% of global data and analytics decision makers say they have implemented, are in the process of implementing, or are expanding or upgrading their implementation of some form of artificial intelligence.

29% of global developers (manager level or higher) have worked on artificial intelligence/machine learning (ML) software in the past year.


In 2023, Forrester predicts that

25% of the Fortune 500 will add AI building blocks (e.g., text analytics and machine learning) to their robotic process automation (RPA) efforts, creating hundreds of new intelligent process automation (IPA) use cases. “RPA needs intelligence and AI needs automation to scale,” says Forrester.

As a quarter of Fortune 500 enterprises redirect AI investments to more mundane, shorter-term, or tactical IPA projects with “crystal-clear performance gains,” roughly half of the AI platform providers, global systems integrators, and managed service providers will emphasize IPA in their portfolios.

Building on the proven success of these IPA use cases, IDC forecasts that by 2023, 75 percent of enterprises will apply intelligent automation to technology and process development, using AI-based software to discover operational and experiential insights to guide innovation.

And by 2024, AI will be integral to every part of the business, with 25 percent of overall spend on AI solutions going to “outcomes-as-a-service” offerings that drive innovation at scale and superior business value.

AI will become the new UI by redefining user experiences, with more than 50 percent of user touches augmented by computer vision, speech, natural language, and AR/VR. Over the next several years, we will see AI, along with the emerging user interfaces of computer vision, natural language processing, and gesture, embedded in every type of product and device.

Emerging technologies are hyped technologies. In 2023, warns Forrester, three high-profile PR disasters will “rattle reputations,” because the potential areas for AI error and harm will multiply: the spread of deepfakes, wrongful use of facial recognition, and over-personalization. By 2023, predicts IDC, 15 percent of customer experience applications will be continuously hyper-personalized by combining a variety of data and newer reinforcement learning algorithms.

Accentuating the positive, Forrester is nonetheless confident that “these imbroglios won’t dampen AI adoption plans next year. Instead, they will highlight the importance of testing, designing, and deploying responsible AI systems, with robust AI governance that considers bias, fairness, transparency, explainability, and accountability.”

IDC forecasts that by 2023, perhaps as a consequence of a few high-profile PR disasters, over 70 percent of G2000 companies will have formal programs to monitor their “digital trustworthiness” as digital trust becomes a critical corporate asset.

Leadership matters, says Forrester, and companies with chief data officers (CDOs) are about 1.5 times more likely to use AI, ML, or deep learning for their insights initiatives than those without CDOs.

In 2023, senior executives such as chief data and analytics officers (CDAOs) and CIOs who are serious about AI will ensure that data science teams have what they need in terms of data. The real difficulty, says Forrester, is “sourcing data from a complex portfolio of applications and convincing various data gatekeepers to share.”

AI adoption isn’t consistent across all businesses, and we’re seeing a new digital divide: a split between the AI haves and the AI have-nots, those with or without the necessary highly skilled engineers.

In 2023, says Forrester, the “tech elite” will ramp up AI and design skills while others will “fumble.” Pairing human-centered design skills with AI development capabilities will be crucial. As for the rest of the workforce, by 2024, 75 percent of enterprises will invest in employee retraining and development, including third-party providers, to address new skill requirements and ways of working resulting from AI adoption, predicts IDC.

What constitutes “the workforce” will continue to expand, and IDC forecasts that the IT organization will manage and support a growing workforce of AI-enabled RPA robots as intelligent automation scales across the enterprise. The next addition to this workforce is an army of chatbots, helping with a variety of tasks across the enterprise.

However, Forrester forecasts that four in every five conversational AI interactions will continue to fail the Turing test. By the end of 2023, predicts Forrester, conversational AI will still power fewer than one in five successful customer service interactions.

AI is here, there, and everywhere, and IDC estimates that by 2025, at least 90 percent of new enterprise application releases will include embedded AI functionality. But, adds IDC, truly disruptive AI-led applications will represent only about 10 percent of the total.

So must we wait another five years to see the “truly disruptive” potential of AI finally realized, and then only in a few cases? Another Forrester predictions report warns that in 2023, “the exuberance in AI will crescendo as expectations come down to earth.” While Forrester predicts another new peak in AI funding in 2023, it claims this will be the last one: with over 2,600 companies worldwide, the AI startup ecosystem is crowded.

The most significant sign of the coming downturn, according to Forrester, is the fact that 20 AI companies have raised unicorn-sized funding rounds in the past 12 months. “This can’t be sustainable,” says Forrester. That reminds me of Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds: “The bubble was then full-blown and began to quiver and shake preparatory to its bursting.”
