Careers
Choosing Maas Technologies means partnering with a team of dedicated professionals who prioritize your success.
Our commitment to excellence, innovation, and customer satisfaction sets us apart in the competitive landscape.
Explore our current openings below and see why Maas Technologies is your ideal choice:
Role: Business Analyst
As a Business Analyst, you will join the Information Systems Projects team, supporting the ongoing delivery of technology-driven change through the investigation, specification, and design of high-quality systems, processes, and services.
You will also support delivery across the whole project life cycle. Working alongside other IS functions and with the wider organisation, you will help deliver new or changed IT systems and services.
You may be asked to work on feasibility studies, options analyses, proposed solutions, gap analyses, risk identification, and other project scoping and definition activities. You will also work closely with Project Managers and other colleagues across the full life cycle of change projects, from developing the business case and gathering requirements through to testing, business acceptance, and benefits realisation.
Main Responsibilities:
- Facilitating workshops with stakeholders to elicit business needs and
system requirements.
- Documenting 'as-is' and 'to-be' business processes and system flows.
- Translating business requirements into system (software, platform),
process and service requirements.
- Effectively managing business stakeholder expectations throughout
the project life cycle.
- Working as the project lead on the 'scope and define' phase of large projects.
- Providing support and input to the overall definition of the system
and UAT test scripts.
- Providing implementation support as and when required, including 'cutover' planning, execution, and end-user training.
Essential criteria
- You'll need to be proficient in using structured BA methodologies and techniques, including business process modelling, to capture and document requirements, including the 'as-is' and 'to-be' states.
- You'll need to build and maintain strong relationships with people at
all levels. You'll need to make sure projects deliver their intended
benefits and that people are ready for the changes.
- You'll need to demonstrate strong experience on medium- and large-scale projects that involve third parties and technical resources.
- You'll need experience leading complex and thorough analysis of system and operational issues.
- You'll need to communicate complex information in a range of
formats and to different audiences.
- You'll need to engage people in our project work, be able to make
recommendations, and present arguments fluently and persuasively
to achieve goals.
Desirable criteria
- Possess a strong understanding of end-to-end project and software
development life cycles.
- Are comfortable creating Agile User Stories and Acceptance Criteria.
- Work closely with AI engineers to define end-to-end use cases
involving multi-agent AI systems.
- Have an understanding of AI/ML development.
- Help orchestrate tools using OpenAI function calling and vector databases (an illustrative sketch follows this list).
- Have excellent Microsoft Office skills (including Visio, ideally with
experience of BPMN), and a high-degree of computer literacy.
- Have experience of describing/defining integrations between
platforms, through methods such as ETLs or APIs.
- Take a continuous improvement approach to your practices,
contributing to the review and introduction of industry best practice,
tools and techniques.
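For context, here is a minimal, illustrative sketch of the kind of tool orchestration this refers to, using the OpenAI Python SDK's chat-completions tools interface. The search_kb tool, its schema, and the stubbed vector-database lookup are hypothetical placeholders, not a description of our systems:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One hypothetical tool the model may call; a real implementation
# would query a vector database for relevant passages.
tools = [{
    "type": "function",
    "function": {
        "name": "search_kb",
        "description": "Retrieve passages relevant to a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def search_kb(query: str) -> str:
    return "stub: top passages for " + query  # vector DB lookup goes here

messages = [{"role": "user", "content": "Summarise our refund policy."}]
reply = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
message = reply.choices[0].message

# If the model requested the tool, run it and feed the result back
# so the model can produce a grounded final answer.
if message.tool_calls:
    call = message.tool_calls[0]
    result = search_kb(**json.loads(call.function.arguments))
    messages += [message, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)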
What We Offer:
Competitive salary: £45,000 – £50,000
Flexible hybrid work environment
Career development and growth opportunities
A friendly and collaborative work culture
Location:
London (Hybrid working available – a mix of office and remote work)
How to Apply
If you're ready to take your career to the next level and contribute
to our success, please send your resume and a cover letter to
[email protected]
Role: Data Engineer
Job Type: Full Time
We are looking to fill the position of Data Engineer to help develop and maintain data products. Data Engineering teams are responsible for the delivery and operational stability of the data products they build, and provide ongoing support for those products. Data Engineers work within, and contribute to, the overall data development life cycle as part of multi-functional Agile delivery teams focused on one or more products.
Main Responsibilities:
- Design, build, and operate simple, repeatable ETL data pipelines within distributed processing environments and cloud platforms, as well as localized single-node processing environments (see the sketch after this list).
- Develop prototype code and then productionize it for deployment across a range of ETL, data validation, and other data production processes.
- Develop understanding of the native tooling of one or more of: GCP, Azure, AWS.
- Code at an intermediate or better level in one or more mainstream languages (e.g., Python, SQL, Java, Scala, R), and critically review the code of other data engineers.
- Develop code for a range of data products, including data matching, rule development, scans, and operational outputs.
- Participate in development and maintenance of in-house code libraries.
- Undertake unit testing to support common code development.
- Review business requirements to ensure they are clear and robust, and transform
requirements into reusable production-ready code and/or effective data models.
- Understand the key principles of database design and be able to resolve technical
problems in databases, data processes, data products, and services as they occur.
Initiate actions, monitor services, and identify trends to resolve problems.
- Apply correct techniques in normalizing data and building robust relational structures in
database products.
- Undertake basic data analysis, for example, for data profiling, QA, or problem resolution.
- Support queries from end users on data quality issues affecting data production;
communicate issues and blockers to develop solutions.
- Undertake source system analysis and data profiling to confirm data quality and ensure
accurate metadata.
- Understand relevant data sources, tools, and systems. Work with experts to develop
validation frameworks for both simple and complex data sources.
- Communicate with customers, for example, to update on progress or confirm
requirements.
- Use a working knowledge of cloud data engineering tools to assist with the development of data pipelines, products, and automation processes.
- Work with other team members in development of tool performance logging and
monitoring.
- Work with other team members in applying engineering team and community best
practice in data products and pipelines.
- Predict and advise on technology changes in the engineering toolset(s) and platform(s) you work on.
- Work with the team in implementing the transition to modern data platforms, including data warehouse and lakehouse architectures, across the data products, pipelines, and processes you are responsible for.
- Engage with other professional communities (Data Science, Architecture, etc.) to identify
emerging and cross-community issues affecting your role and escalate as necessary.
- Describe technical, data, pipeline, and production issues to colleagues of different
specialisms.
- Communicate within the team and across teams to monitor expectations around delivery
of data engineering, products, and pipelines, blockers, priorities, and issues. Escalate
issues and blockers in delivery proactively.
- Develop data pipelines and products for very large volume 'big data' series using a range of native engineering tools, practices, and coding approaches.
- Apply engineering standards across the platform and native toolset, keeping your outputs up to date with them.
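As a minimal, illustrative sketch of the 'simple, repeatable ETL data pipelines' responsibility above, here is a single-node example in Python. The file name, table, and schema are hypothetical; the upsert is what makes reruns repeatable:

import csv
import sqlite3

def extract(path: str) -> list[dict]:
    # Extract: read raw rows from a source CSV (the path is hypothetical).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    # Transform: drop rows without a numeric id and normalise names.
    return [
        (int(r["id"]), r["name"].strip().title())
        for r in rows
        if r.get("id", "").strip().isdigit()
    ]

def load(records: list[tuple], db_path: str = "warehouse.db") -> None:
    # Load: upsert into the target table so reruns are idempotent.
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)"
    )
    con.executemany("INSERT OR REPLACE INTO customers VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("customers.csv")))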
Typical Data Engineering Experience Required (5+ years):
- Knowledge and experience of Azure/AWS Cloud data solution provision.
- Proficient in SQL.
- Ability to develop and deliver complex visualisation, reporting, and dashboard solutions using tools such as Power BI or Informatica.
- Enterprise-scale experience with ETL tools (Informatica or similar).
- Experience of data modelling and transforming raw data into datasets and extracts.
- Experience of working on large-scale projects in a complex organisation, and knowledge of migrating legacy capabilities.
- Experience in Agile.
- Ability to analyse and collect information and evidence, identify problems and
opportunities, and ensure recommendations fit with strategic business objectives.
- Experience of building team capability through role modelling, mentoring, and coaching.
- Ability to manage relationships with non-technical colleagues and can work in a
collaborative, inclusive way.
- Ability to design, write, and operate ETL pipelines, in the context of distributed
processing, applying coding, data, and documentation standards, in the language
required by the business area.
- Understanding of the principles of data processing in a distributed and or cloud platform,
and ability to use this understanding to ensure robust coding in a distributed or cloud
environment.
- Able to write clean, efficient, and well-documented code for data processing tasks to a specification.
- Able to use Git for code version control: pulling, pushing, and reviewing merge requests for your own and the team's code.
- Ability to undertake simple data and code analysis for effective quality assurance and to
resolve processing issues.
- Experience of one or more of the following programming/coding languages: Python/PySpark, SQL, Proc SQL, NoSQL, MySQL, SQLite, Spark SQL, Hive SQL, PostgreSQL, SAS, SAS E-Guide, Scala, RegEx, Java, R.
- Investigate problems in systems, processes and services, with an understanding of the
level of a problem, for example, strategic, tactical or operational.
- Undertake data profiling and source system analysis for data evaluation, issue resolution
or data standardisation.
- Use metadata repositories to complete complex tasks such as data and systems
integration impact analysis.
- Good knowledge of database structures, practices, and principles of database integrity.
- Basic knowledge of applying database principles and SQL coding across a range of platform database and data querying tools (e.g., SQL Server, Cloud SQL, BigQuery, Hive, Athena).
- Show an awareness of opportunities for innovation with new tools and uses of data.
- Experience in more than one of the following tools is required for Engineers engaging in BI development: Plotly, R Shiny, Tableau, QlikView/Qlik Sense, Power BI, SAP, Business Objects, MicroStrategy, Snowflake.
- Experience of several of the following tools: NiFi, HBase, Bash, Assist, Putty, Neo4j, Spark, Kafka, HDFS, Oozie, GitHub, Unix, Hadoop, Impala, DoJo, Flume, Elastic, Logstash, Kibana, Airflow, Glue, BigQuery, Athena, CML, Hive, Informatica, CuteFTP.
- Ability to explain and communicate technical concepts in non-technical language.
- Explain the types of communication that can be used with internal and external
stakeholders, and their impact.
- Design, build and test data products based on feeds from multiple systems, using a
range of different storage technologies, access methods or both.
- Able to explain and implement the concepts and principles of data modelling.
- Ability to create and run simple unit tests (a minimal sketch follows this list).
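As a minimal sketch of the simple unit testing mentioned above, using Python's standard unittest module; the normalise_name function is a hypothetical transformation under test:

import unittest

def normalise_name(raw: str) -> str:
    # Hypothetical transformation under test: trim and title-case a name.
    return raw.strip().title()

class NormaliseNameTest(unittest.TestCase):
    def test_strips_whitespace_and_title_cases(self):
        self.assertEqual(normalise_name("  ada LOVELACE "), "Ada Lovelace")

    def test_empty_string_passes_through(self):
        self.assertEqual(normalise_name(""), "")

if __name__ == "__main__":
    unittest.main()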
Desired Qualifications
- Certifications in AWS, Azure, Databricks, or related technologies.
- Experience with public sector data initiatives and compliance requirements.
- Knowledge of machine learning and artificial intelligence concepts.
What We Offer:
Competitive salary: £45,000 – £50,000
Flexible hybrid work environment
Career development and growth opportunities
A friendly and collaborative work culture
Location:
London (Hybrid working available – a mix of office and remote work)
How to Apply
If you're ready to take your career to the next level and contribute
to our success, please send your resume and a cover letter to
[email protected]