Joined on: 2017-12-09
Białystok, Poland
SKILLS: data analysis, django, python, selenium

Good knowledge and usage of technologies:
- Backend: Python, Django, third-party Python and Django libraries, DRF
- Databases: MySQL, PostgreSQL, MongoDB
- Deployment: virtualenv, pip, Bash, Nginx, uWSGI, Gunicorn, supervisor
- Testing: Selenium, pytest, unittest
- NLP: NLTK

Good knowledge and usage of tools that support project work:
- Repository: Git, git-flow
- Management: Bitbucket, GitHub, Google Docs, LaTeX, Jira, Google Calendar
- Communication: IRC, Slack, Stack Overflow
- Celery, RabbitMQ, Redis
- Docker
- Rancher

Operating Systems:
- GNU/Linux
- Mac OS X

In addition, during my years at university I became interested in and acquired knowledge of:

- OpenCV (a computer vision system that identifies people based on facial features, written in Python using NumPy and a Haar cascade classifier)

- PySpark (server log analysis from my studies: writing and running a Python script that, for a given user, computes the total time of that user's operations on each unique server (host) and the percentage of time their operations took on each host)
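The aggregation described above can be sketched in plain Python (the PySpark version would use groupBy/aggregation over the same fields; the record schema here is illustrative, not the original log format):

```python
from collections import defaultdict

def time_per_host(log_records, user):
    """For one user, return each host's total operation time and its share (%).

    log_records: iterable of (user, host, seconds) tuples -- illustrative schema.
    """
    totals = defaultdict(float)
    for rec_user, host, seconds in log_records:
        if rec_user == user:
            totals[host] += seconds
    grand_total = sum(totals.values())
    return {
        host: (secs, 100.0 * secs / grand_total if grand_total else 0.0)
        for host, secs in totals.items()
    }

records = [
    ("alice", "srv1", 30.0),
    ("alice", "srv2", 10.0),
    ("bob", "srv1", 99.0),
]
print(time_per_host(records, "alice"))
# srv1: (30.0, 75.0), srv2: (10.0, 25.0)
```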

- Google App Engine (an application using at least three services offered by the platform, including a datastore service storing at least two entity types related "one to many"; written in Python, based on the MVT pattern)

- Master-Worker (a distributed application written in Python, running in a Condor environment and using a Master-Worker library, that recovers a "lost" password given its MD5-generated hash)
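The hash-search idea above can be sketched with the standard library alone; the Condor/Master-Worker distribution is omitted here (in the distributed version, each worker would receive a slice of the candidate space). The alphabet and length bound are illustrative:

```python
import hashlib
from itertools import product

def crack_md5(target_hex, alphabet="abc", max_len=4):
    """Brute-force search for a password whose MD5 digest matches target_hex."""
    for length in range(1, max_len + 1):
        for candidate in product(alphabet, repeat=length):
            word = "".join(candidate)
            if hashlib.md5(word.encode()).hexdigest() == target_hex:
                return word
    return None  # not found within the searched space

lost_hash = hashlib.md5(b"cab").hexdigest()
print(crack_md5(lost_hash))  # cab
```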

Additional activities:
- Participant at PyStok (meetings of Białystok's Python community)
- Long-distance running (half marathons, marathons)
- Sports instructor (swimming and karate)
- Teacher/lifeguard at youth camps (Funclub travel agency)


Projects:

- Universal PubMed Crawler: crawls scientific publications to find as many freely available scientific articles as possible. A Python project based on asynchronous tasks (Celery), a web architecture (the Django framework) and a REST API, all packaged as Docker microservices so that anyone with minimal technical knowledge can run and scale it easily.

- VHA: crawls/scrapes pages to find as many medical products and their attributes (for classifier purposes) as possible. A Python project based on the Scrapy crawling engine, Redis for coordinating crawling requests, and MongoDB/GridFS for storing crawled files along with their metadata.

- Discharge Director: unit/integration tests for a post-acute care design solution (a Python/Django web application) that helps discharging clinicians take their patients all the way to health restoration. Discharge Director computes the right combination of named post-acute providers and requisite support services (medical and social) to help a patient get well quickly and stay well at the lowest possible cost.

Generally:
- Python/Django based applications (MySQL, PostgreSQL, MongoDB).
- DRF (Django REST Framework) based REST APIs.
- Crawling web pages using Python and Celery (RabbitMQ or Redis as the broker).
- Writing unit tests and functional tests using Selenium.
- Using Sphinx/apiDoc for documenting projects.
- Using Plotly for data visualization.
- Using Docker as a development environment.
- Using Jenkins for Continuous Integration/Delivery.
- Using Jira for agile project management.
- Using Jupyter for sharing interactive data.
- Using NLP technologies for data mining/extraction: GROBID, Tika, NLTK.
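As a minimal illustration of the unit-testing style mentioned above, a sketch using Python's built-in unittest (the helper function and its name are hypothetical, not taken from any of the projects):

```python
import unittest

def normalize_title(raw: str) -> str:
    """Hypothetical helper: collapse whitespace and title-case a crawled title."""
    return " ".join(raw.split()).strip().title()

class NormalizeTitleTests(unittest.TestCase):
    def test_collapses_whitespace(self):
        self.assertEqual(normalize_title("  deep   learning "), "Deep Learning")

    def test_empty_string(self):
        self.assertEqual(normalize_title(""), "")

if __name__ == "__main__":
    unittest.main()
```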
