Career-Climb: An Application To Support University Students In Searching For Job Opportunities And Applications
Abstract
Technology has become essential to modern life because of its convenience and ease of use, yet career-oriented applications still need to be more accessible and user-friendly for everyone. It is therefore important to develop an application that offers a personalized experience and comprehensive support functions for both employers and job seekers. Online job-search platforms play a crucial role in connecting job seekers with employers, and they can also help characterize a candidate's personality from their answers to a set of questions.
This thesis focuses on developing a job-search application as a web platform, so that it can be accessed from anywhere with an internet connection. The solution combines technologies from the Node.js ecosystem with a personality prediction model for job seekers.
Predictive modeling is a mathematical process that forecasts future events or outcomes by analyzing patterns in a given set of input data. In this application, the process relies on users filling out a questionnaire on the website; an algorithm then uses the submitted answers to estimate the user's Myers–Briggs Type Indicator (MBTI) personality type.
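As a minimal illustration of this step, the sketch below trains a simple classifier that maps questionnaire answers to an MBTI type. It is only an outline under assumptions: the choice of Python with scikit-learn as the prototyping stack, the file name mbti_responses.csv, the answer columns, and the use of logistic regression are illustrative, not the exact pipeline of the thesis.

```python
# Hypothetical sketch: predicting an MBTI type from questionnaire answers.
# File name, column names, and model choice are assumptions for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row: one respondent's numeric answers plus their labeled MBTI type.
data = pd.read_csv("mbti_responses.csv")          # assumed file layout
X = data.drop(columns=["personality_type"])       # answer columns (e.g. Q1..Q60)
y = data["personality_type"]                      # labels such as "INTJ", "ENFP", ...

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A plain multinomial logistic regression serves as a baseline predictor.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# At prediction time, a job seeker's form answers become one feature row.
new_answers = X_test.iloc[[0]]
print("Predicted type:", model.predict(new_answers)[0])
```

Any classifier with comparable interfaces could replace the baseline here; the point is only that the website's form answers become the model's feature vector.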
Furthermore, the Node.js ecosystem offers numerous packages that support the construction of the final website, which helps developers during research and saves time when building user interfaces.
Together, these technologies provide a robust toolkit for creating a job-finding
website, enabling developers to build scalable, efficient, and secure applications that deliver a
personalized experience to users. The method involves the following steps:
Data discovery: Kaggle provides a wide range of reliable datasets for developers to
train useful models for their applications. Developers can easily explore and access datasets
on Kaggle that cover various topics such as character analysis.
(The link to the original dataset [24]: https://www.kaggle.com/datasets/anshulmehtakaggl/60k-responses-of-16-personalities-test-mbt/data)
Data Preprocessing: Once collected, the data must undergo preprocessing before
analysis. This includes cleaning the data, removing duplicates, and converting it into an easily
analyzable format.
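A brief sketch of what this preprocessing could look like is given below. The column names, file names, and cleaning rules are assumptions made for illustration, since the exact schema of the downloaded dataset is not described here.

```python
# Hypothetical preprocessing sketch: clean, deduplicate, and reshape the raw responses.
import pandas as pd

raw = pd.read_csv("16_personalities_responses.csv")   # assumed raw export from Kaggle

# Drop exact duplicate submissions and rows with missing answers.
clean = raw.drop_duplicates().dropna()

# Normalize the label column to upper-case four-letter MBTI codes (assumed column name).
clean["personality_type"] = clean["personality_type"].str.strip().str.upper()

# Keep only the answer columns plus the label, in an analysis-friendly layout.
answer_cols = [c for c in clean.columns if c.startswith("Q")]
clean = clean[answer_cols + ["personality_type"]]

clean.to_csv("mbti_responses.csv", index=False)        # feeds the model sketch above
```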
Feedback and Improvement: The final step involves continuously gathering user feedback and enhancing the prediction model based on it. This helps the model provide accurate predictions and timely advice to users.
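One simple way to realize this feedback loop, sketched below under assumed names, is to append verified feedback rows to the training data and periodically refit the same baseline classifier; the feedback file and the retraining cadence are illustrative assumptions, not the procedure prescribed by the thesis.

```python
# Hypothetical retraining sketch: fold verified user feedback back into the model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("mbti_responses.csv")               # current training set
feedback = pd.read_csv("verified_feedback.csv")         # assumed: same columns, user-confirmed labels

# Merge the feedback with the existing data, dropping any repeated submissions.
combined = pd.concat([train, feedback]).drop_duplicates()

X = combined.drop(columns=["personality_type"])
y = combined["personality_type"]

# Refit the baseline model on the enlarged dataset (e.g. as a scheduled batch job).
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

combined.to_csv("mbti_responses.csv", index=False)      # becomes the next round's training set
```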