Data Engineer 100%

Full-time @ ITech Consult AG, published 3 days ago

Job Description

Data Engineer (m/f/d)

Background:
As a Senior Data Engineer at Diagnostics Operations Rotkreuz, you will lead the development and optimization of a robust data infrastructure that drives our DOD operations and data products. In this strategic role, you will work closely with multidisciplinary teams, including data scientists, subject matter experts, and fellow data engineers, to design, transform, and enhance data assets that are foundational to our advanced modelling approaches. You will take a lead role in shaping and scaling data product standards across DIA Global Operations, ensuring alignment, quality, and impact at the organisational level. Beyond hands-on contributions, you will provide technical guidance to juniors and fellow data engineers alike, establishing best practices and fostering a culture of innovation.

The ideal candidate
The ideal candidate holds a Master's degree in Computer Science or Data Engineering and brings over five years' experience in data engineering, with a track record of architecting and scaling large data systems. We are also looking for someone with proven experience in building and managing data pipelines and products.

Tasks & Responsibilities:
Lead Data Pipelines & Storage: Design and build scalable data pipelines for real-time and batch processing. Drive architectural decisions and long-term planning for scalable, FAIR data products.
High-Quality Data Products: Create high-quality data products adhering to FAIR principles. Address complex challenges, ensure compliance, and make strategic decisions to shape data roadmaps.
Collaboration & Integration: Model data landscapes, acquire data extracts, and define secure exchange methods in collaboration with experts and cross-functional teams.
Data Ingestion & Processing: Ingest and process data from diverse sources into Big Data platforms (e.g., Snowflake). Develop ERDs and reusable pipelines for advanced analytics.
Technical Guidance & Governance: Contribute to our Data Mesh Engineering Collective to establish data governance standards and to ensure regulatory compliance and data security. Mentor others and promote best practices.
Information Security & Infrastructure Collaboration: Ensure adherence to information security standards. Collaborate with infrastructure teams on tailored tech stacks. Make independent decisions on data strategies.
Innovation & Knowledge Sharing: Shape the data engineering roadmap and set standards for data quality and governance. Proactively share best practices.
Technical Proficiency: Maintain proficiency in data engineering tech stacks and in data quality and observability tools (e.g., Ataccama, Monte Carlo).
Adherence to Standards: Ensure compliance with relevant guidelines and data governance standards. Develop long-term enterprise tools.

Must Have:
Master's degree in Computer Science, Data Engineering, or a related field.
Over 5 years in data engineering with a track record of architecting and scaling large data systems.
5+ years of experience in leading and mentoring data engineers.
Proven experience in building and managing data pipelines and products.
Skilled in handling structured, semi-structured, and unstructured data.
Proficiency in Python, Java, SQL, or Scala, and experience with big data technologies (e.g., Hadoop, Spark).
Expertise in multiple cloud platforms (AWS, Azure, GCP) and data warehousing technologies (preferably Snowflake).
Deep understanding of information security to ensure compliant handling and management of process data.
Familiarity with data modeling and ETL tools.
Knowledge of version control systems such as Git, and of CI/CD pipelines.
Proficiency in implementing robust testing practices and monitoring pipelines for performance, reliability, and data quality.
Client-facing project experience.
Proven ability to communicate complex solutions to varied technical audiences.
Strong organizational and interpersonal skills for delivering results and optimizing resources.
Ability to work independently and collaboratively within a team environment.
Strong ability to influence.

Nice to Have:
3+ years of experience in the pharmaceutical or healthcare industry.
Experience with REST APIs and integrating data from various sources.
Knowledge of regulatory requirements (e.g., GMP, FDA) and quality systems.
Experience with AI-driven data solutions and machine learning pipelines.
Experience with ML platforms (e.g., Dataiku).
Knowledge of software engineering best practices (code reviews, testing, maintainability).

Reference nr.: SDA
Role: Data Engineer 100%
Industry: Pharma
Workplace: Rotkreuz
Workload: 100%
Earliest start date: ASAP
Latest start date:
Duration: (possible extension)

If you are interested in this position, please send us your complete CV. If this position does not match your profile and you would like to apply for another position directly, you can also send us your dossier via this advert or to jobs at itcag dot com. Contact us for more information about our company, our positions, or our attractive payroll-only programme.
About us:
ITech Consult is an ISO 9001:2015 certified Swiss company with offices in Germany and Ireland. ITech Consult specialises in the placement of highly qualified candidates for recruitment in the IT, Life Science & Engineering sectors. We offer recruitment & payroll services. This is free of charge for our candidates, and we do not charge any additional fees for payroll services.

